00:00:00.001 Started by upstream project "autotest-per-patch" build number 132716 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.068 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.069 The recommended git tool is: git 00:00:00.069 using credential 00000000-0000-0000-0000-000000000002 00:00:00.071 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.098 Fetching changes from the remote Git repository 00:00:00.100 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.140 Using shallow fetch with depth 1 00:00:00.140 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.140 > git --version # timeout=10 00:00:00.222 > git --version # 'git version 2.39.2' 00:00:00.222 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.259 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.259 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.946 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.960 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.972 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:05.972 > git config core.sparsecheckout # timeout=10 00:00:05.983 > git read-tree -mu HEAD # timeout=10 00:00:05.999 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:06.025 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:06.025 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:06.166 [Pipeline] Start of Pipeline 00:00:06.183 [Pipeline] library 00:00:06.185 Loading library shm_lib@master 00:00:06.185 Library shm_lib@master is cached. Copying from home. 00:00:06.201 [Pipeline] node 00:00:06.212 Running on VM-host-SM17 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:06.214 [Pipeline] { 00:00:06.225 [Pipeline] catchError 00:00:06.227 [Pipeline] { 00:00:06.239 [Pipeline] wrap 00:00:06.245 [Pipeline] { 00:00:06.253 [Pipeline] stage 00:00:06.255 [Pipeline] { (Prologue) 00:00:06.268 [Pipeline] echo 00:00:06.269 Node: VM-host-SM17 00:00:06.274 [Pipeline] cleanWs 00:00:06.282 [WS-CLEANUP] Deleting project workspace... 00:00:06.282 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.287 [WS-CLEANUP] done 00:00:06.500 [Pipeline] setCustomBuildProperty 00:00:06.574 [Pipeline] httpRequest 00:00:07.364 [Pipeline] echo 00:00:07.366 Sorcerer 10.211.164.101 is alive 00:00:07.374 [Pipeline] retry 00:00:07.376 [Pipeline] { 00:00:07.391 [Pipeline] httpRequest 00:00:07.396 HttpMethod: GET 00:00:07.396 URL: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.397 Sending request to url: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.414 Response Code: HTTP/1.1 200 OK 00:00:07.415 Success: Status code 200 is in the accepted range: 200,404 00:00:07.416 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:12.711 [Pipeline] } 00:00:12.726 [Pipeline] // retry 00:00:12.731 [Pipeline] sh 00:00:13.010 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:13.026 [Pipeline] httpRequest 00:00:13.936 [Pipeline] echo 00:00:13.938 Sorcerer 10.211.164.101 is alive 00:00:13.949 [Pipeline] retry 00:00:13.951 [Pipeline] { 00:00:13.965 [Pipeline] httpRequest 00:00:13.969 HttpMethod: GET 00:00:13.970 URL: http://10.211.164.101/packages/spdk_eec61894813d3232c06044a3a6cd4dc2076c84bc.tar.gz 00:00:13.970 Sending request to url: http://10.211.164.101/packages/spdk_eec61894813d3232c06044a3a6cd4dc2076c84bc.tar.gz 00:00:13.995 Response Code: HTTP/1.1 200 OK 00:00:13.996 Success: Status code 200 is in the accepted range: 200,404 00:00:13.997 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_eec61894813d3232c06044a3a6cd4dc2076c84bc.tar.gz 00:02:21.681 [Pipeline] } 00:02:21.697 [Pipeline] // retry 00:02:21.704 [Pipeline] sh 00:02:21.982 + tar --no-same-owner -xf spdk_eec61894813d3232c06044a3a6cd4dc2076c84bc.tar.gz 00:02:24.525 [Pipeline] sh 00:02:24.804 + git -C spdk log --oneline -n5 00:02:24.804 eec618948 lib/reduce: Unmap backing dev blocks 00:02:24.804 a5e6ecf28 lib/reduce: Data copy logic in thin read operations 00:02:24.804 a333974e5 nvme/rdma: Flush queued send WRs when disconnecting a qpair 00:02:24.804 2b8672176 nvme/rdma: Prevent submitting new recv WR when disconnecting 00:02:24.804 e2dfdf06c accel/mlx5: Register post_poller handler 00:02:24.828 [Pipeline] writeFile 00:02:24.846 [Pipeline] sh 00:02:25.131 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:02:25.143 [Pipeline] sh 00:02:25.424 + cat autorun-spdk.conf 00:02:25.424 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:25.424 SPDK_TEST_NVMF=1 00:02:25.424 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:25.424 SPDK_TEST_URING=1 00:02:25.424 SPDK_TEST_USDT=1 00:02:25.424 SPDK_RUN_UBSAN=1 00:02:25.424 NET_TYPE=virt 00:02:25.424 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:25.431 RUN_NIGHTLY=0 00:02:25.433 [Pipeline] } 00:02:25.447 [Pipeline] // stage 00:02:25.463 [Pipeline] stage 00:02:25.465 [Pipeline] { (Run VM) 00:02:25.478 [Pipeline] sh 00:02:25.760 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:02:25.760 + echo 'Start stage prepare_nvme.sh' 00:02:25.760 Start stage prepare_nvme.sh 00:02:25.760 + [[ -n 3 ]] 00:02:25.760 + disk_prefix=ex3 00:02:25.760 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:02:25.760 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:02:25.760 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:02:25.760 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:25.760 ++ SPDK_TEST_NVMF=1 00:02:25.760 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 
00:02:25.760 ++ SPDK_TEST_URING=1 00:02:25.760 ++ SPDK_TEST_USDT=1 00:02:25.760 ++ SPDK_RUN_UBSAN=1 00:02:25.760 ++ NET_TYPE=virt 00:02:25.760 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:25.760 ++ RUN_NIGHTLY=0 00:02:25.760 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:02:25.760 + nvme_files=() 00:02:25.760 + declare -A nvme_files 00:02:25.760 + backend_dir=/var/lib/libvirt/images/backends 00:02:25.760 + nvme_files['nvme.img']=5G 00:02:25.760 + nvme_files['nvme-cmb.img']=5G 00:02:25.760 + nvme_files['nvme-multi0.img']=4G 00:02:25.760 + nvme_files['nvme-multi1.img']=4G 00:02:25.760 + nvme_files['nvme-multi2.img']=4G 00:02:25.760 + nvme_files['nvme-openstack.img']=8G 00:02:25.760 + nvme_files['nvme-zns.img']=5G 00:02:25.760 + (( SPDK_TEST_NVME_PMR == 1 )) 00:02:25.760 + (( SPDK_TEST_FTL == 1 )) 00:02:25.760 + (( SPDK_TEST_NVME_FDP == 1 )) 00:02:25.760 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:02:25.760 + for nvme in "${!nvme_files[@]}" 00:02:25.760 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi2.img -s 4G 00:02:25.760 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:02:25.760 + for nvme in "${!nvme_files[@]}" 00:02:25.760 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-cmb.img -s 5G 00:02:25.760 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:02:25.760 + for nvme in "${!nvme_files[@]}" 00:02:25.760 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-openstack.img -s 8G 00:02:25.760 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:02:25.760 + for nvme in "${!nvme_files[@]}" 00:02:25.760 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-zns.img -s 5G 00:02:25.760 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:02:25.760 + for nvme in "${!nvme_files[@]}" 00:02:25.760 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi1.img -s 4G 00:02:25.760 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:02:25.760 + for nvme in "${!nvme_files[@]}" 00:02:25.760 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi0.img -s 4G 00:02:25.760 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:02:25.760 + for nvme in "${!nvme_files[@]}" 00:02:25.760 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme.img -s 5G 00:02:26.019 Formatting '/var/lib/libvirt/images/backends/ex3-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:02:26.019 ++ sudo grep -rl ex3-nvme.img /etc/libvirt/qemu 00:02:26.019 + echo 'End stage prepare_nvme.sh' 00:02:26.019 End stage prepare_nvme.sh 00:02:26.074 [Pipeline] sh 00:02:26.389 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:02:26.389 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex3-nvme.img -b 
/var/lib/libvirt/images/backends/ex3-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img -H -a -v -f fedora39 00:02:26.389 00:02:26.389 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 00:02:26.389 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:02:26.389 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:02:26.389 HELP=0 00:02:26.389 DRY_RUN=0 00:02:26.389 NVME_FILE=/var/lib/libvirt/images/backends/ex3-nvme.img,/var/lib/libvirt/images/backends/ex3-nvme-multi0.img, 00:02:26.389 NVME_DISKS_TYPE=nvme,nvme, 00:02:26.389 NVME_AUTO_CREATE=0 00:02:26.389 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img, 00:02:26.389 NVME_CMB=,, 00:02:26.389 NVME_PMR=,, 00:02:26.389 NVME_ZNS=,, 00:02:26.389 NVME_MS=,, 00:02:26.389 NVME_FDP=,, 00:02:26.389 SPDK_VAGRANT_DISTRO=fedora39 00:02:26.389 SPDK_VAGRANT_VMCPU=10 00:02:26.389 SPDK_VAGRANT_VMRAM=12288 00:02:26.389 SPDK_VAGRANT_PROVIDER=libvirt 00:02:26.389 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:02:26.389 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:02:26.389 SPDK_OPENSTACK_NETWORK=0 00:02:26.389 VAGRANT_PACKAGE_BOX=0 00:02:26.389 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:02:26.389 FORCE_DISTRO=true 00:02:26.389 VAGRANT_BOX_VERSION= 00:02:26.389 EXTRA_VAGRANTFILES= 00:02:26.389 NIC_MODEL=e1000 00:02:26.389 00:02:26.389 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt' 00:02:26.389 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:02:29.674 Bringing machine 'default' up with 'libvirt' provider... 00:02:29.674 ==> default: Creating image (snapshot of base box volume). 00:02:29.933 ==> default: Creating domain with the following settings... 
00:02:29.934 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1733477994_4610a976364545a0c745 00:02:29.934 ==> default: -- Domain type: kvm 00:02:29.934 ==> default: -- Cpus: 10 00:02:29.934 ==> default: -- Feature: acpi 00:02:29.934 ==> default: -- Feature: apic 00:02:29.934 ==> default: -- Feature: pae 00:02:29.934 ==> default: -- Memory: 12288M 00:02:29.934 ==> default: -- Memory Backing: hugepages: 00:02:29.934 ==> default: -- Management MAC: 00:02:29.934 ==> default: -- Loader: 00:02:29.934 ==> default: -- Nvram: 00:02:29.934 ==> default: -- Base box: spdk/fedora39 00:02:29.934 ==> default: -- Storage pool: default 00:02:29.934 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733477994_4610a976364545a0c745.img (20G) 00:02:29.934 ==> default: -- Volume Cache: default 00:02:29.934 ==> default: -- Kernel: 00:02:29.934 ==> default: -- Initrd: 00:02:29.934 ==> default: -- Graphics Type: vnc 00:02:29.934 ==> default: -- Graphics Port: -1 00:02:29.934 ==> default: -- Graphics IP: 127.0.0.1 00:02:29.934 ==> default: -- Graphics Password: Not defined 00:02:29.934 ==> default: -- Video Type: cirrus 00:02:29.934 ==> default: -- Video VRAM: 9216 00:02:29.934 ==> default: -- Sound Type: 00:02:29.934 ==> default: -- Keymap: en-us 00:02:29.934 ==> default: -- TPM Path: 00:02:29.934 ==> default: -- INPUT: type=mouse, bus=ps2 00:02:29.934 ==> default: -- Command line args: 00:02:29.934 ==> default: -> value=-device, 00:02:29.934 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:02:29.934 ==> default: -> value=-drive, 00:02:29.934 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme.img,if=none,id=nvme-0-drive0, 00:02:29.934 ==> default: -> value=-device, 00:02:29.934 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:29.934 ==> default: -> value=-device, 00:02:29.934 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:02:29.934 ==> default: -> value=-drive, 00:02:29.934 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:02:29.934 ==> default: -> value=-device, 00:02:29.934 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:29.934 ==> default: -> value=-drive, 00:02:29.934 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:02:29.934 ==> default: -> value=-device, 00:02:29.934 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:29.934 ==> default: -> value=-drive, 00:02:29.934 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:02:29.934 ==> default: -> value=-device, 00:02:29.934 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:29.934 ==> default: Creating shared folders metadata... 00:02:29.934 ==> default: Starting domain. 00:02:32.466 ==> default: Waiting for domain to get an IP address... 00:02:50.550 ==> default: Waiting for SSH to become available... 00:02:50.550 ==> default: Configuring and enabling network interfaces... 
00:02:53.081 default: SSH address: 192.168.121.27:22 00:02:53.081 default: SSH username: vagrant 00:02:53.081 default: SSH auth method: private key 00:02:54.984 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:03:03.098 ==> default: Mounting SSHFS shared folder... 00:03:04.476 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:03:04.476 ==> default: Checking Mount.. 00:03:05.855 ==> default: Folder Successfully Mounted! 00:03:05.855 ==> default: Running provisioner: file... 00:03:06.793 default: ~/.gitconfig => .gitconfig 00:03:07.051 00:03:07.051 SUCCESS! 00:03:07.051 00:03:07.051 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:03:07.051 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:03:07.051 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:03:07.051 00:03:07.060 [Pipeline] } 00:03:07.077 [Pipeline] // stage 00:03:07.086 [Pipeline] dir 00:03:07.087 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt 00:03:07.089 [Pipeline] { 00:03:07.102 [Pipeline] catchError 00:03:07.104 [Pipeline] { 00:03:07.117 [Pipeline] sh 00:03:07.395 + vagrant ssh-config --host vagrant 00:03:07.395 + sed -ne /^Host/,$p 00:03:07.395 + tee ssh_conf 00:03:10.687 Host vagrant 00:03:10.687 HostName 192.168.121.27 00:03:10.687 User vagrant 00:03:10.687 Port 22 00:03:10.687 UserKnownHostsFile /dev/null 00:03:10.687 StrictHostKeyChecking no 00:03:10.687 PasswordAuthentication no 00:03:10.687 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:03:10.687 IdentitiesOnly yes 00:03:10.687 LogLevel FATAL 00:03:10.687 ForwardAgent yes 00:03:10.687 ForwardX11 yes 00:03:10.687 00:03:10.724 [Pipeline] withEnv 00:03:10.726 [Pipeline] { 00:03:10.740 [Pipeline] sh 00:03:11.049 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:03:11.049 source /etc/os-release 00:03:11.049 [[ -e /image.version ]] && img=$(< /image.version) 00:03:11.049 # Minimal, systemd-like check. 00:03:11.049 if [[ -e /.dockerenv ]]; then 00:03:11.049 # Clear garbage from the node's name: 00:03:11.049 # agt-er_autotest_547-896 -> autotest_547-896 00:03:11.049 # $HOSTNAME is the actual container id 00:03:11.049 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:03:11.049 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:03:11.049 # We can assume this is a mount from a host where container is running, 00:03:11.049 # so fetch its hostname to easily identify the target swarm worker. 
00:03:11.049 container="$(< /etc/hostname) ($agent)" 00:03:11.049 else 00:03:11.049 # Fallback 00:03:11.049 container=$agent 00:03:11.049 fi 00:03:11.049 fi 00:03:11.049 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:03:11.049 00:03:11.319 [Pipeline] } 00:03:11.335 [Pipeline] // withEnv 00:03:11.343 [Pipeline] setCustomBuildProperty 00:03:11.358 [Pipeline] stage 00:03:11.360 [Pipeline] { (Tests) 00:03:11.377 [Pipeline] sh 00:03:11.658 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:03:11.931 [Pipeline] sh 00:03:12.211 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:03:12.226 [Pipeline] timeout 00:03:12.227 Timeout set to expire in 1 hr 0 min 00:03:12.229 [Pipeline] { 00:03:12.244 [Pipeline] sh 00:03:12.524 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:03:13.092 HEAD is now at eec618948 lib/reduce: Unmap backing dev blocks 00:03:13.104 [Pipeline] sh 00:03:13.385 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:03:13.658 [Pipeline] sh 00:03:13.943 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:03:13.961 [Pipeline] sh 00:03:14.239 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:03:14.498 ++ readlink -f spdk_repo 00:03:14.498 + DIR_ROOT=/home/vagrant/spdk_repo 00:03:14.498 + [[ -n /home/vagrant/spdk_repo ]] 00:03:14.498 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:03:14.498 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:03:14.498 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:03:14.498 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:03:14.498 + [[ -d /home/vagrant/spdk_repo/output ]] 00:03:14.498 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:03:14.498 + cd /home/vagrant/spdk_repo 00:03:14.498 + source /etc/os-release 00:03:14.498 ++ NAME='Fedora Linux' 00:03:14.498 ++ VERSION='39 (Cloud Edition)' 00:03:14.498 ++ ID=fedora 00:03:14.498 ++ VERSION_ID=39 00:03:14.498 ++ VERSION_CODENAME= 00:03:14.498 ++ PLATFORM_ID=platform:f39 00:03:14.498 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:03:14.498 ++ ANSI_COLOR='0;38;2;60;110;180' 00:03:14.498 ++ LOGO=fedora-logo-icon 00:03:14.498 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:03:14.498 ++ HOME_URL=https://fedoraproject.org/ 00:03:14.498 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:03:14.498 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:03:14.498 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:03:14.498 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:03:14.498 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:03:14.498 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:03:14.498 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:03:14.498 ++ SUPPORT_END=2024-11-12 00:03:14.498 ++ VARIANT='Cloud Edition' 00:03:14.498 ++ VARIANT_ID=cloud 00:03:14.498 + uname -a 00:03:14.498 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:03:14.498 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:14.757 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:15.015 Hugepages 00:03:15.015 node hugesize free / total 00:03:15.015 node0 1048576kB 0 / 0 00:03:15.015 node0 2048kB 0 / 0 00:03:15.015 00:03:15.015 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:15.015 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:03:15.015 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:03:15.016 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:03:15.016 + rm -f /tmp/spdk-ld-path 00:03:15.016 + source autorun-spdk.conf 00:03:15.016 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:15.016 ++ SPDK_TEST_NVMF=1 00:03:15.016 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:03:15.016 ++ SPDK_TEST_URING=1 00:03:15.016 ++ SPDK_TEST_USDT=1 00:03:15.016 ++ SPDK_RUN_UBSAN=1 00:03:15.016 ++ NET_TYPE=virt 00:03:15.016 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:03:15.016 ++ RUN_NIGHTLY=0 00:03:15.016 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:03:15.016 + [[ -n '' ]] 00:03:15.016 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:03:15.016 + for M in /var/spdk/build-*-manifest.txt 00:03:15.016 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:03:15.016 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:03:15.016 + for M in /var/spdk/build-*-manifest.txt 00:03:15.016 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:03:15.016 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:03:15.016 + for M in /var/spdk/build-*-manifest.txt 00:03:15.016 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:03:15.016 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:03:15.016 ++ uname 00:03:15.016 + [[ Linux == \L\i\n\u\x ]] 00:03:15.016 + sudo dmesg -T 00:03:15.016 + sudo dmesg --clear 00:03:15.016 + dmesg_pid=5209 00:03:15.016 + [[ Fedora Linux == FreeBSD ]] 00:03:15.016 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:15.016 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:15.016 + sudo 
dmesg -Tw 00:03:15.016 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:03:15.016 + [[ -x /usr/src/fio-static/fio ]] 00:03:15.016 + export FIO_BIN=/usr/src/fio-static/fio 00:03:15.016 + FIO_BIN=/usr/src/fio-static/fio 00:03:15.016 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:03:15.016 + [[ ! -v VFIO_QEMU_BIN ]] 00:03:15.016 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:03:15.016 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:03:15.016 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:03:15.016 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:03:15.016 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:03:15.016 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:03:15.016 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:15.274 09:40:40 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:03:15.274 09:40:40 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:15.274 09:40:40 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:15.274 09:40:40 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:03:15.274 09:40:40 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:03:15.274 09:40:40 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_URING=1 00:03:15.274 09:40:40 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_TEST_USDT=1 00:03:15.274 09:40:40 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:03:15.274 09:40:40 -- spdk_repo/autorun-spdk.conf@7 -- $ NET_TYPE=virt 00:03:15.274 09:40:40 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:03:15.274 09:40:40 -- spdk_repo/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:03:15.274 09:40:40 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:03:15.274 09:40:40 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:15.274 09:40:40 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:03:15.274 09:40:40 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:15.274 09:40:40 -- scripts/common.sh@15 -- $ shopt -s extglob 00:03:15.274 09:40:40 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:03:15.274 09:40:40 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:15.274 09:40:40 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:15.275 09:40:40 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:15.275 09:40:40 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:15.275 09:40:40 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:15.275 09:40:40 -- paths/export.sh@5 -- $ export PATH 00:03:15.275 09:40:40 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:15.275 09:40:40 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:03:15.275 09:40:40 -- common/autobuild_common.sh@493 -- $ date +%s 00:03:15.275 09:40:40 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733478040.XXXXXX 00:03:15.275 09:40:40 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733478040.UJpRT4 00:03:15.275 09:40:40 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:03:15.275 09:40:40 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:03:15.275 09:40:40 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:03:15.275 09:40:40 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:03:15.275 09:40:40 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:03:15.275 09:40:40 -- common/autobuild_common.sh@509 -- $ get_config_params 00:03:15.275 09:40:40 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:03:15.275 09:40:40 -- common/autotest_common.sh@10 -- $ set +x 00:03:15.275 09:40:40 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring' 00:03:15.275 09:40:40 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:03:15.275 09:40:40 -- pm/common@17 -- $ local monitor 00:03:15.275 09:40:40 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:15.275 09:40:40 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:15.275 09:40:40 -- pm/common@25 -- $ sleep 1 00:03:15.275 09:40:40 -- pm/common@21 -- $ date +%s 00:03:15.275 09:40:40 -- pm/common@21 -- $ date +%s 00:03:15.275 09:40:40 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733478040 00:03:15.275 09:40:40 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733478040 00:03:15.275 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733478040_collect-cpu-load.pm.log 00:03:15.275 Redirecting to 
/home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733478040_collect-vmstat.pm.log 00:03:16.208 09:40:41 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:03:16.208 09:40:41 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:03:16.208 09:40:41 -- spdk/autobuild.sh@12 -- $ umask 022 00:03:16.208 09:40:41 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:16.208 09:40:41 -- spdk/autobuild.sh@16 -- $ date -u 00:03:16.208 Fri Dec 6 09:40:41 AM UTC 2024 00:03:16.208 09:40:41 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:03:16.208 v25.01-pre-304-geec618948 00:03:16.208 09:40:41 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:03:16.208 09:40:41 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:03:16.208 09:40:41 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:03:16.208 09:40:41 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:03:16.208 09:40:41 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:03:16.208 09:40:41 -- common/autotest_common.sh@10 -- $ set +x 00:03:16.208 ************************************ 00:03:16.208 START TEST ubsan 00:03:16.209 ************************************ 00:03:16.209 using ubsan 00:03:16.209 09:40:41 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:03:16.209 00:03:16.209 real 0m0.000s 00:03:16.209 user 0m0.000s 00:03:16.209 sys 0m0.000s 00:03:16.209 09:40:41 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:16.209 09:40:41 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:03:16.209 ************************************ 00:03:16.209 END TEST ubsan 00:03:16.209 ************************************ 00:03:16.467 09:40:41 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:03:16.467 09:40:41 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:16.467 09:40:41 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:16.467 09:40:41 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:16.467 09:40:41 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:16.467 09:40:41 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:16.467 09:40:41 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:16.467 09:40:41 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:16.467 09:40:41 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared 00:03:16.467 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:16.467 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:03:16.725 Using 'verbs' RDMA provider 00:03:32.537 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:03:44.741 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:03:44.741 Creating mk/config.mk...done. 00:03:44.741 Creating mk/cc.flags.mk...done. 00:03:44.741 Type 'make' to build. 
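[Editor's note] For reference, the configure/build step recorded above can be replayed by hand outside of Jenkins. The following is a minimal sketch only: it reuses the exact autorun-spdk.conf values and configure flags shown earlier in this log, and it assumes the VM-internal layout (/home/vagrant/spdk_repo, fio sources at /usr/src/fio); adapt the paths for a local SPDK checkout.

#!/usr/bin/env bash
# Sketch: replay the configure + build step from this log on an SPDK checkout.
# Paths and flags are copied from the log above; adjust SPDK_DIR for your environment.
set -euo pipefail

SPDK_DIR=/home/vagrant/spdk_repo/spdk

# Same test flags that autorun-spdk.conf carried in this run.
cat > /home/vagrant/spdk_repo/autorun-spdk.conf <<'EOF'
SPDK_RUN_FUNCTIONAL_TEST=1
SPDK_TEST_NVMF=1
SPDK_TEST_NVMF_TRANSPORT=tcp
SPDK_TEST_URING=1
SPDK_TEST_USDT=1
SPDK_RUN_UBSAN=1
NET_TYPE=virt
SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
RUN_NIGHTLY=0
EOF

# Same configure invocation autobuild.sh issued above, followed by the parallel build.
cd "$SPDK_DIR"
./configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd \
    --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
    --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared
make -j10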
00:03:44.741 09:41:08 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:03:44.741 09:41:08 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:03:44.741 09:41:08 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:03:44.741 09:41:08 -- common/autotest_common.sh@10 -- $ set +x 00:03:44.741 ************************************ 00:03:44.741 START TEST make 00:03:44.741 ************************************ 00:03:44.741 09:41:09 make -- common/autotest_common.sh@1129 -- $ make -j10 00:03:44.741 make[1]: Nothing to be done for 'all'. 00:03:56.945 The Meson build system 00:03:56.945 Version: 1.5.0 00:03:56.946 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:03:56.946 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:03:56.946 Build type: native build 00:03:56.946 Program cat found: YES (/usr/bin/cat) 00:03:56.946 Project name: DPDK 00:03:56.946 Project version: 24.03.0 00:03:56.946 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:56.946 C linker for the host machine: cc ld.bfd 2.40-14 00:03:56.946 Host machine cpu family: x86_64 00:03:56.946 Host machine cpu: x86_64 00:03:56.946 Message: ## Building in Developer Mode ## 00:03:56.946 Program pkg-config found: YES (/usr/bin/pkg-config) 00:03:56.946 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:03:56.946 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:03:56.946 Program python3 found: YES (/usr/bin/python3) 00:03:56.946 Program cat found: YES (/usr/bin/cat) 00:03:56.946 Compiler for C supports arguments -march=native: YES 00:03:56.946 Checking for size of "void *" : 8 00:03:56.946 Checking for size of "void *" : 8 (cached) 00:03:56.946 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:03:56.946 Library m found: YES 00:03:56.946 Library numa found: YES 00:03:56.946 Has header "numaif.h" : YES 00:03:56.946 Library fdt found: NO 00:03:56.946 Library execinfo found: NO 00:03:56.946 Has header "execinfo.h" : YES 00:03:56.946 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:56.946 Run-time dependency libarchive found: NO (tried pkgconfig) 00:03:56.946 Run-time dependency libbsd found: NO (tried pkgconfig) 00:03:56.946 Run-time dependency jansson found: NO (tried pkgconfig) 00:03:56.946 Run-time dependency openssl found: YES 3.1.1 00:03:56.946 Run-time dependency libpcap found: YES 1.10.4 00:03:56.946 Has header "pcap.h" with dependency libpcap: YES 00:03:56.946 Compiler for C supports arguments -Wcast-qual: YES 00:03:56.946 Compiler for C supports arguments -Wdeprecated: YES 00:03:56.946 Compiler for C supports arguments -Wformat: YES 00:03:56.946 Compiler for C supports arguments -Wformat-nonliteral: NO 00:03:56.946 Compiler for C supports arguments -Wformat-security: NO 00:03:56.946 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:56.946 Compiler for C supports arguments -Wmissing-prototypes: YES 00:03:56.946 Compiler for C supports arguments -Wnested-externs: YES 00:03:56.946 Compiler for C supports arguments -Wold-style-definition: YES 00:03:56.946 Compiler for C supports arguments -Wpointer-arith: YES 00:03:56.946 Compiler for C supports arguments -Wsign-compare: YES 00:03:56.946 Compiler for C supports arguments -Wstrict-prototypes: YES 00:03:56.946 Compiler for C supports arguments -Wundef: YES 00:03:56.946 Compiler for C supports arguments -Wwrite-strings: YES 00:03:56.946 Compiler for C supports 
arguments -Wno-address-of-packed-member: YES 00:03:56.946 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:03:56.946 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:56.946 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:03:56.946 Program objdump found: YES (/usr/bin/objdump) 00:03:56.946 Compiler for C supports arguments -mavx512f: YES 00:03:56.946 Checking if "AVX512 checking" compiles: YES 00:03:56.946 Fetching value of define "__SSE4_2__" : 1 00:03:56.946 Fetching value of define "__AES__" : 1 00:03:56.946 Fetching value of define "__AVX__" : 1 00:03:56.946 Fetching value of define "__AVX2__" : 1 00:03:56.946 Fetching value of define "__AVX512BW__" : (undefined) 00:03:56.946 Fetching value of define "__AVX512CD__" : (undefined) 00:03:56.946 Fetching value of define "__AVX512DQ__" : (undefined) 00:03:56.946 Fetching value of define "__AVX512F__" : (undefined) 00:03:56.946 Fetching value of define "__AVX512VL__" : (undefined) 00:03:56.946 Fetching value of define "__PCLMUL__" : 1 00:03:56.946 Fetching value of define "__RDRND__" : 1 00:03:56.946 Fetching value of define "__RDSEED__" : 1 00:03:56.946 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:03:56.946 Fetching value of define "__znver1__" : (undefined) 00:03:56.946 Fetching value of define "__znver2__" : (undefined) 00:03:56.946 Fetching value of define "__znver3__" : (undefined) 00:03:56.946 Fetching value of define "__znver4__" : (undefined) 00:03:56.946 Compiler for C supports arguments -Wno-format-truncation: YES 00:03:56.946 Message: lib/log: Defining dependency "log" 00:03:56.946 Message: lib/kvargs: Defining dependency "kvargs" 00:03:56.946 Message: lib/telemetry: Defining dependency "telemetry" 00:03:56.946 Checking for function "getentropy" : NO 00:03:56.946 Message: lib/eal: Defining dependency "eal" 00:03:56.946 Message: lib/ring: Defining dependency "ring" 00:03:56.946 Message: lib/rcu: Defining dependency "rcu" 00:03:56.946 Message: lib/mempool: Defining dependency "mempool" 00:03:56.946 Message: lib/mbuf: Defining dependency "mbuf" 00:03:56.946 Fetching value of define "__PCLMUL__" : 1 (cached) 00:03:56.946 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:03:56.946 Compiler for C supports arguments -mpclmul: YES 00:03:56.946 Compiler for C supports arguments -maes: YES 00:03:56.946 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:56.946 Compiler for C supports arguments -mavx512bw: YES 00:03:56.946 Compiler for C supports arguments -mavx512dq: YES 00:03:56.946 Compiler for C supports arguments -mavx512vl: YES 00:03:56.946 Compiler for C supports arguments -mvpclmulqdq: YES 00:03:56.946 Compiler for C supports arguments -mavx2: YES 00:03:56.946 Compiler for C supports arguments -mavx: YES 00:03:56.946 Message: lib/net: Defining dependency "net" 00:03:56.946 Message: lib/meter: Defining dependency "meter" 00:03:56.946 Message: lib/ethdev: Defining dependency "ethdev" 00:03:56.946 Message: lib/pci: Defining dependency "pci" 00:03:56.946 Message: lib/cmdline: Defining dependency "cmdline" 00:03:56.946 Message: lib/hash: Defining dependency "hash" 00:03:56.946 Message: lib/timer: Defining dependency "timer" 00:03:56.946 Message: lib/compressdev: Defining dependency "compressdev" 00:03:56.946 Message: lib/cryptodev: Defining dependency "cryptodev" 00:03:56.946 Message: lib/dmadev: Defining dependency "dmadev" 00:03:56.946 Compiler for C supports arguments -Wno-cast-qual: YES 00:03:56.946 Message: lib/power: Defining 
dependency "power" 00:03:56.946 Message: lib/reorder: Defining dependency "reorder" 00:03:56.946 Message: lib/security: Defining dependency "security" 00:03:56.946 Has header "linux/userfaultfd.h" : YES 00:03:56.946 Has header "linux/vduse.h" : YES 00:03:56.946 Message: lib/vhost: Defining dependency "vhost" 00:03:56.946 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:03:56.946 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:03:56.946 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:03:56.946 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:03:56.946 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:03:56.946 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:03:56.946 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:03:56.946 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:03:56.946 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:03:56.946 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:03:56.946 Program doxygen found: YES (/usr/local/bin/doxygen) 00:03:56.946 Configuring doxy-api-html.conf using configuration 00:03:56.946 Configuring doxy-api-man.conf using configuration 00:03:56.946 Program mandb found: YES (/usr/bin/mandb) 00:03:56.946 Program sphinx-build found: NO 00:03:56.946 Configuring rte_build_config.h using configuration 00:03:56.946 Message: 00:03:56.946 ================= 00:03:56.946 Applications Enabled 00:03:56.946 ================= 00:03:56.946 00:03:56.946 apps: 00:03:56.946 00:03:56.946 00:03:56.946 Message: 00:03:56.946 ================= 00:03:56.946 Libraries Enabled 00:03:56.946 ================= 00:03:56.946 00:03:56.946 libs: 00:03:56.946 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:03:56.946 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:03:56.946 cryptodev, dmadev, power, reorder, security, vhost, 00:03:56.946 00:03:56.946 Message: 00:03:56.946 =============== 00:03:56.946 Drivers Enabled 00:03:56.946 =============== 00:03:56.946 00:03:56.946 common: 00:03:56.946 00:03:56.946 bus: 00:03:56.946 pci, vdev, 00:03:56.946 mempool: 00:03:56.946 ring, 00:03:56.946 dma: 00:03:56.946 00:03:56.946 net: 00:03:56.946 00:03:56.946 crypto: 00:03:56.946 00:03:56.946 compress: 00:03:56.946 00:03:56.946 vdpa: 00:03:56.946 00:03:56.946 00:03:56.946 Message: 00:03:56.946 ================= 00:03:56.946 Content Skipped 00:03:56.946 ================= 00:03:56.946 00:03:56.946 apps: 00:03:56.946 dumpcap: explicitly disabled via build config 00:03:56.946 graph: explicitly disabled via build config 00:03:56.946 pdump: explicitly disabled via build config 00:03:56.946 proc-info: explicitly disabled via build config 00:03:56.946 test-acl: explicitly disabled via build config 00:03:56.946 test-bbdev: explicitly disabled via build config 00:03:56.946 test-cmdline: explicitly disabled via build config 00:03:56.946 test-compress-perf: explicitly disabled via build config 00:03:56.946 test-crypto-perf: explicitly disabled via build config 00:03:56.946 test-dma-perf: explicitly disabled via build config 00:03:56.946 test-eventdev: explicitly disabled via build config 00:03:56.946 test-fib: explicitly disabled via build config 00:03:56.946 test-flow-perf: explicitly disabled via build config 00:03:56.946 test-gpudev: explicitly disabled via build config 00:03:56.946 test-mldev: explicitly disabled via build config 00:03:56.946 test-pipeline: 
explicitly disabled via build config 00:03:56.946 test-pmd: explicitly disabled via build config 00:03:56.946 test-regex: explicitly disabled via build config 00:03:56.946 test-sad: explicitly disabled via build config 00:03:56.946 test-security-perf: explicitly disabled via build config 00:03:56.946 00:03:56.946 libs: 00:03:56.946 argparse: explicitly disabled via build config 00:03:56.946 metrics: explicitly disabled via build config 00:03:56.947 acl: explicitly disabled via build config 00:03:56.947 bbdev: explicitly disabled via build config 00:03:56.947 bitratestats: explicitly disabled via build config 00:03:56.947 bpf: explicitly disabled via build config 00:03:56.947 cfgfile: explicitly disabled via build config 00:03:56.947 distributor: explicitly disabled via build config 00:03:56.947 efd: explicitly disabled via build config 00:03:56.947 eventdev: explicitly disabled via build config 00:03:56.947 dispatcher: explicitly disabled via build config 00:03:56.947 gpudev: explicitly disabled via build config 00:03:56.947 gro: explicitly disabled via build config 00:03:56.947 gso: explicitly disabled via build config 00:03:56.947 ip_frag: explicitly disabled via build config 00:03:56.947 jobstats: explicitly disabled via build config 00:03:56.947 latencystats: explicitly disabled via build config 00:03:56.947 lpm: explicitly disabled via build config 00:03:56.947 member: explicitly disabled via build config 00:03:56.947 pcapng: explicitly disabled via build config 00:03:56.947 rawdev: explicitly disabled via build config 00:03:56.947 regexdev: explicitly disabled via build config 00:03:56.947 mldev: explicitly disabled via build config 00:03:56.947 rib: explicitly disabled via build config 00:03:56.947 sched: explicitly disabled via build config 00:03:56.947 stack: explicitly disabled via build config 00:03:56.947 ipsec: explicitly disabled via build config 00:03:56.947 pdcp: explicitly disabled via build config 00:03:56.947 fib: explicitly disabled via build config 00:03:56.947 port: explicitly disabled via build config 00:03:56.947 pdump: explicitly disabled via build config 00:03:56.947 table: explicitly disabled via build config 00:03:56.947 pipeline: explicitly disabled via build config 00:03:56.947 graph: explicitly disabled via build config 00:03:56.947 node: explicitly disabled via build config 00:03:56.947 00:03:56.947 drivers: 00:03:56.947 common/cpt: not in enabled drivers build config 00:03:56.947 common/dpaax: not in enabled drivers build config 00:03:56.947 common/iavf: not in enabled drivers build config 00:03:56.947 common/idpf: not in enabled drivers build config 00:03:56.947 common/ionic: not in enabled drivers build config 00:03:56.947 common/mvep: not in enabled drivers build config 00:03:56.947 common/octeontx: not in enabled drivers build config 00:03:56.947 bus/auxiliary: not in enabled drivers build config 00:03:56.947 bus/cdx: not in enabled drivers build config 00:03:56.947 bus/dpaa: not in enabled drivers build config 00:03:56.947 bus/fslmc: not in enabled drivers build config 00:03:56.947 bus/ifpga: not in enabled drivers build config 00:03:56.947 bus/platform: not in enabled drivers build config 00:03:56.947 bus/uacce: not in enabled drivers build config 00:03:56.947 bus/vmbus: not in enabled drivers build config 00:03:56.947 common/cnxk: not in enabled drivers build config 00:03:56.947 common/mlx5: not in enabled drivers build config 00:03:56.947 common/nfp: not in enabled drivers build config 00:03:56.947 common/nitrox: not in enabled drivers build config 
00:03:56.947 common/qat: not in enabled drivers build config 00:03:56.947 common/sfc_efx: not in enabled drivers build config 00:03:56.947 mempool/bucket: not in enabled drivers build config 00:03:56.947 mempool/cnxk: not in enabled drivers build config 00:03:56.947 mempool/dpaa: not in enabled drivers build config 00:03:56.947 mempool/dpaa2: not in enabled drivers build config 00:03:56.947 mempool/octeontx: not in enabled drivers build config 00:03:56.947 mempool/stack: not in enabled drivers build config 00:03:56.947 dma/cnxk: not in enabled drivers build config 00:03:56.947 dma/dpaa: not in enabled drivers build config 00:03:56.947 dma/dpaa2: not in enabled drivers build config 00:03:56.947 dma/hisilicon: not in enabled drivers build config 00:03:56.947 dma/idxd: not in enabled drivers build config 00:03:56.947 dma/ioat: not in enabled drivers build config 00:03:56.947 dma/skeleton: not in enabled drivers build config 00:03:56.947 net/af_packet: not in enabled drivers build config 00:03:56.947 net/af_xdp: not in enabled drivers build config 00:03:56.947 net/ark: not in enabled drivers build config 00:03:56.947 net/atlantic: not in enabled drivers build config 00:03:56.947 net/avp: not in enabled drivers build config 00:03:56.947 net/axgbe: not in enabled drivers build config 00:03:56.947 net/bnx2x: not in enabled drivers build config 00:03:56.947 net/bnxt: not in enabled drivers build config 00:03:56.947 net/bonding: not in enabled drivers build config 00:03:56.947 net/cnxk: not in enabled drivers build config 00:03:56.947 net/cpfl: not in enabled drivers build config 00:03:56.947 net/cxgbe: not in enabled drivers build config 00:03:56.947 net/dpaa: not in enabled drivers build config 00:03:56.947 net/dpaa2: not in enabled drivers build config 00:03:56.947 net/e1000: not in enabled drivers build config 00:03:56.947 net/ena: not in enabled drivers build config 00:03:56.947 net/enetc: not in enabled drivers build config 00:03:56.947 net/enetfec: not in enabled drivers build config 00:03:56.947 net/enic: not in enabled drivers build config 00:03:56.947 net/failsafe: not in enabled drivers build config 00:03:56.947 net/fm10k: not in enabled drivers build config 00:03:56.947 net/gve: not in enabled drivers build config 00:03:56.947 net/hinic: not in enabled drivers build config 00:03:56.947 net/hns3: not in enabled drivers build config 00:03:56.947 net/i40e: not in enabled drivers build config 00:03:56.947 net/iavf: not in enabled drivers build config 00:03:56.947 net/ice: not in enabled drivers build config 00:03:56.947 net/idpf: not in enabled drivers build config 00:03:56.947 net/igc: not in enabled drivers build config 00:03:56.947 net/ionic: not in enabled drivers build config 00:03:56.947 net/ipn3ke: not in enabled drivers build config 00:03:56.947 net/ixgbe: not in enabled drivers build config 00:03:56.947 net/mana: not in enabled drivers build config 00:03:56.947 net/memif: not in enabled drivers build config 00:03:56.947 net/mlx4: not in enabled drivers build config 00:03:56.947 net/mlx5: not in enabled drivers build config 00:03:56.947 net/mvneta: not in enabled drivers build config 00:03:56.947 net/mvpp2: not in enabled drivers build config 00:03:56.947 net/netvsc: not in enabled drivers build config 00:03:56.947 net/nfb: not in enabled drivers build config 00:03:56.947 net/nfp: not in enabled drivers build config 00:03:56.947 net/ngbe: not in enabled drivers build config 00:03:56.947 net/null: not in enabled drivers build config 00:03:56.947 net/octeontx: not in enabled drivers 
build config 00:03:56.947 net/octeon_ep: not in enabled drivers build config 00:03:56.947 net/pcap: not in enabled drivers build config 00:03:56.947 net/pfe: not in enabled drivers build config 00:03:56.947 net/qede: not in enabled drivers build config 00:03:56.947 net/ring: not in enabled drivers build config 00:03:56.947 net/sfc: not in enabled drivers build config 00:03:56.947 net/softnic: not in enabled drivers build config 00:03:56.947 net/tap: not in enabled drivers build config 00:03:56.947 net/thunderx: not in enabled drivers build config 00:03:56.947 net/txgbe: not in enabled drivers build config 00:03:56.947 net/vdev_netvsc: not in enabled drivers build config 00:03:56.947 net/vhost: not in enabled drivers build config 00:03:56.947 net/virtio: not in enabled drivers build config 00:03:56.947 net/vmxnet3: not in enabled drivers build config 00:03:56.947 raw/*: missing internal dependency, "rawdev" 00:03:56.947 crypto/armv8: not in enabled drivers build config 00:03:56.947 crypto/bcmfs: not in enabled drivers build config 00:03:56.947 crypto/caam_jr: not in enabled drivers build config 00:03:56.947 crypto/ccp: not in enabled drivers build config 00:03:56.947 crypto/cnxk: not in enabled drivers build config 00:03:56.947 crypto/dpaa_sec: not in enabled drivers build config 00:03:56.947 crypto/dpaa2_sec: not in enabled drivers build config 00:03:56.947 crypto/ipsec_mb: not in enabled drivers build config 00:03:56.947 crypto/mlx5: not in enabled drivers build config 00:03:56.947 crypto/mvsam: not in enabled drivers build config 00:03:56.947 crypto/nitrox: not in enabled drivers build config 00:03:56.947 crypto/null: not in enabled drivers build config 00:03:56.947 crypto/octeontx: not in enabled drivers build config 00:03:56.947 crypto/openssl: not in enabled drivers build config 00:03:56.947 crypto/scheduler: not in enabled drivers build config 00:03:56.947 crypto/uadk: not in enabled drivers build config 00:03:56.947 crypto/virtio: not in enabled drivers build config 00:03:56.947 compress/isal: not in enabled drivers build config 00:03:56.947 compress/mlx5: not in enabled drivers build config 00:03:56.947 compress/nitrox: not in enabled drivers build config 00:03:56.947 compress/octeontx: not in enabled drivers build config 00:03:56.947 compress/zlib: not in enabled drivers build config 00:03:56.947 regex/*: missing internal dependency, "regexdev" 00:03:56.947 ml/*: missing internal dependency, "mldev" 00:03:56.947 vdpa/ifc: not in enabled drivers build config 00:03:56.947 vdpa/mlx5: not in enabled drivers build config 00:03:56.947 vdpa/nfp: not in enabled drivers build config 00:03:56.947 vdpa/sfc: not in enabled drivers build config 00:03:56.947 event/*: missing internal dependency, "eventdev" 00:03:56.947 baseband/*: missing internal dependency, "bbdev" 00:03:56.947 gpu/*: missing internal dependency, "gpudev" 00:03:56.947 00:03:56.947 00:03:56.947 Build targets in project: 85 00:03:56.947 00:03:56.947 DPDK 24.03.0 00:03:56.947 00:03:56.947 User defined options 00:03:56.947 buildtype : debug 00:03:56.947 default_library : shared 00:03:56.947 libdir : lib 00:03:56.947 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:03:56.947 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:03:56.947 c_link_args : 00:03:56.947 cpu_instruction_set: native 00:03:56.947 disable_apps : 
dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:03:56.947 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:03:56.947 enable_docs : false 00:03:56.947 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:03:56.947 enable_kmods : false 00:03:56.947 max_lcores : 128 00:03:56.947 tests : false 00:03:56.947 00:03:56.947 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:56.947 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:03:56.948 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:03:56.948 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:56.948 [3/268] Linking static target lib/librte_kvargs.a 00:03:56.948 [4/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:56.948 [5/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:03:56.948 [6/268] Linking static target lib/librte_log.a 00:03:57.207 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:57.467 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:57.726 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:57.726 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:57.726 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:57.726 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:57.726 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:57.726 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:57.726 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:57.726 [16/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:57.986 [17/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:57.986 [18/268] Linking target lib/librte_log.so.24.1 00:03:57.986 [19/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:57.986 [20/268] Linking static target lib/librte_telemetry.a 00:03:58.245 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:03:58.245 [22/268] Linking target lib/librte_kvargs.so.24.1 00:03:58.504 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:58.504 [24/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:03:58.504 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:58.504 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:58.504 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:58.504 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:58.763 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:58.763 [30/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:58.763 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:58.763 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:58.763 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:59.022 [34/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:59.022 [35/268] Linking target lib/librte_telemetry.so.24.1 00:03:59.280 [36/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:03:59.280 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:59.280 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:59.538 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:59.538 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:59.538 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:59.538 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:59.538 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:59.538 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:59.538 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:59.797 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:59.797 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:59.797 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:59.797 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:04:00.054 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:04:00.311 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:04:00.311 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:04:00.569 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:04:00.569 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:04:00.569 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:04:00.569 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:04:00.826 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:04:00.826 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:04:00.826 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:04:00.826 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:04:01.084 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:04:01.084 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:04:01.343 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:04:01.617 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:04:01.617 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:04:01.617 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:04:01.617 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:04:01.617 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:04:01.617 [69/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:04:01.875 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:04:02.134 [71/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:04:02.134 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:04:02.134 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:04:02.134 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:04:02.392 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:04:02.392 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:04:02.392 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:04:02.392 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:04:02.392 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:04:02.392 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:04:02.650 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:04:02.650 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:04:02.909 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:04:02.909 [84/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:04:02.909 [85/268] Linking static target lib/librte_ring.a 00:04:02.909 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:04:02.909 [87/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:04:02.909 [88/268] Linking static target lib/librte_eal.a 00:04:03.167 [89/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:04:03.167 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:04:03.167 [91/268] Linking static target lib/librte_rcu.a 00:04:03.167 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:04:03.426 [93/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:04:03.426 [94/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:04:03.426 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:04:03.685 [96/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:04:03.685 [97/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:04:03.685 [98/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:04:03.685 [99/268] Linking static target lib/librte_mempool.a 00:04:03.685 [100/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:04:03.685 [101/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:04:03.944 [102/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:04:03.944 [103/268] Linking static target lib/librte_mbuf.a 00:04:03.944 [104/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:04:03.944 [105/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:04:04.203 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:04:04.203 [107/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:04:04.203 [108/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:04:04.203 [109/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:04:04.203 [110/268] Linking static target lib/librte_meter.a 00:04:04.203 [111/268] Linking static target lib/librte_net.a 00:04:04.462 [112/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:04:04.721 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:04:04.721 [114/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:04:04.721 [115/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:04:04.721 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:04:05.039 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:04:05.039 [118/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:04:05.039 [119/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:04:05.297 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:04:05.556 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:04:05.815 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:04:05.815 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:04:06.074 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:04:06.074 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:04:06.074 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:04:06.074 [127/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:04:06.074 [128/268] Linking static target lib/librte_pci.a 00:04:06.074 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:04:06.074 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:04:06.074 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:04:06.074 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:04:06.334 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:04:06.334 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:04:06.334 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:04:06.334 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:04:06.334 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:04:06.334 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:04:06.334 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:04:06.334 [140/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:04:06.334 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:04:06.334 [142/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:04:06.334 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:04:06.593 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:04:06.593 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:04:06.593 [146/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:04:06.851 [147/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:04:06.851 [148/268] Linking static target lib/librte_ethdev.a 00:04:06.851 [149/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:04:06.851 [150/268] Linking static target lib/librte_cmdline.a 00:04:06.851 [151/268] Compiling C object 
lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:04:07.111 [152/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:04:07.370 [153/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:04:07.371 [154/268] Linking static target lib/librte_timer.a 00:04:07.371 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:04:07.371 [156/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:04:07.371 [157/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:04:07.371 [158/268] Linking static target lib/librte_hash.a 00:04:07.630 [159/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:04:07.630 [160/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:04:07.630 [161/268] Linking static target lib/librte_compressdev.a 00:04:07.889 [162/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:04:07.889 [163/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:04:08.148 [164/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:04:08.148 [165/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:04:08.407 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:04:08.407 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:04:08.407 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:04:08.666 [169/268] Linking static target lib/librte_dmadev.a 00:04:08.666 [170/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:04:08.666 [171/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:04:08.666 [172/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:04:08.666 [173/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:04:08.666 [174/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:08.666 [175/268] Linking static target lib/librte_cryptodev.a 00:04:08.666 [176/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:04:08.666 [177/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:04:09.234 [178/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:04:09.235 [179/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:04:09.235 [180/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:04:09.235 [181/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:04:09.493 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:04:09.493 [183/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:04:09.493 [184/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:09.493 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:04:09.493 [186/268] Linking static target lib/librte_power.a 00:04:10.060 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:04:10.060 [188/268] Linking static target lib/librte_reorder.a 00:04:10.060 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:04:10.060 [190/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:04:10.060 [191/268] Linking static 
target lib/librte_security.a 00:04:10.060 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:04:10.318 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:04:10.576 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:04:10.576 [195/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:04:10.835 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:04:10.835 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:04:11.093 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:04:11.093 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:04:11.351 [200/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:11.351 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:04:11.351 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:04:11.609 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:04:11.609 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:04:11.895 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:04:11.895 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:04:12.152 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:04:12.152 [208/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:04:12.152 [209/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:04:12.152 [210/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:04:12.152 [211/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:04:12.410 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:04:12.410 [213/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:04:12.410 [214/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:04:12.410 [215/268] Linking static target drivers/librte_bus_vdev.a 00:04:12.410 [216/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:04:12.410 [217/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:04:12.410 [218/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:04:12.410 [219/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:04:12.410 [220/268] Linking static target drivers/librte_bus_pci.a 00:04:12.410 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:04:12.410 [222/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:04:12.668 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:04:12.668 [224/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:12.668 [225/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:04:12.668 [226/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:04:12.668 [227/268] Linking static target drivers/librte_mempool_ring.a 00:04:12.926 [228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by 
meson to capture output) 00:04:13.491 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:04:13.491 [230/268] Linking static target lib/librte_vhost.a 00:04:14.426 [231/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:04:14.685 [232/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:04:14.685 [233/268] Linking target lib/librte_eal.so.24.1 00:04:14.685 [234/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:04:14.685 [235/268] Linking target lib/librte_timer.so.24.1 00:04:14.685 [236/268] Linking target lib/librte_meter.so.24.1 00:04:14.943 [237/268] Linking target lib/librte_dmadev.so.24.1 00:04:14.943 [238/268] Linking target drivers/librte_bus_vdev.so.24.1 00:04:14.943 [239/268] Linking target lib/librte_ring.so.24.1 00:04:14.943 [240/268] Linking target lib/librte_pci.so.24.1 00:04:14.943 [241/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:14.943 [242/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:04:14.943 [243/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:04:14.943 [244/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:04:14.943 [245/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:04:14.943 [246/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:04:14.943 [247/268] Linking target lib/librte_rcu.so.24.1 00:04:14.943 [248/268] Linking target drivers/librte_bus_pci.so.24.1 00:04:14.943 [249/268] Linking target lib/librte_mempool.so.24.1 00:04:15.202 [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:04:15.202 [251/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:04:15.202 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:04:15.202 [253/268] Linking target lib/librte_mbuf.so.24.1 00:04:15.460 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:04:15.460 [255/268] Linking target lib/librte_net.so.24.1 00:04:15.460 [256/268] Linking target lib/librte_compressdev.so.24.1 00:04:15.460 [257/268] Linking target lib/librte_reorder.so.24.1 00:04:15.460 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:04:15.719 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:04:15.719 [260/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:04:15.719 [261/268] Linking target lib/librte_hash.so.24.1 00:04:15.719 [262/268] Linking target lib/librte_cmdline.so.24.1 00:04:15.719 [263/268] Linking target lib/librte_security.so.24.1 00:04:15.719 [264/268] Linking target lib/librte_ethdev.so.24.1 00:04:15.719 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:04:15.719 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:04:15.978 [267/268] Linking target lib/librte_power.so.24.1 00:04:15.978 [268/268] Linking target lib/librte_vhost.so.24.1 00:04:15.978 INFO: autodetecting backend as ninja 00:04:15.978 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:04:42.538 CC lib/ut/ut.o 00:04:42.538 CC lib/log/log_flags.o 00:04:42.538 CC lib/log/log_deprecated.o 00:04:42.538 CC 
lib/log/log.o 00:04:42.538 CC lib/ut_mock/mock.o 00:04:42.538 LIB libspdk_ut.a 00:04:42.538 LIB libspdk_ut_mock.a 00:04:42.538 LIB libspdk_log.a 00:04:42.538 SO libspdk_ut.so.2.0 00:04:42.538 SO libspdk_ut_mock.so.6.0 00:04:42.538 SO libspdk_log.so.7.1 00:04:42.538 SYMLINK libspdk_ut.so 00:04:42.538 SYMLINK libspdk_ut_mock.so 00:04:42.538 SYMLINK libspdk_log.so 00:04:42.538 CC lib/util/base64.o 00:04:42.538 CC lib/util/cpuset.o 00:04:42.538 CC lib/util/bit_array.o 00:04:42.538 CC lib/util/crc16.o 00:04:42.538 CC lib/util/crc32c.o 00:04:42.538 CC lib/util/crc32.o 00:04:42.538 CXX lib/trace_parser/trace.o 00:04:42.538 CC lib/ioat/ioat.o 00:04:42.538 CC lib/dma/dma.o 00:04:42.538 CC lib/vfio_user/host/vfio_user_pci.o 00:04:42.538 CC lib/util/crc32_ieee.o 00:04:42.538 CC lib/util/crc64.o 00:04:42.538 CC lib/vfio_user/host/vfio_user.o 00:04:42.538 CC lib/util/dif.o 00:04:42.538 CC lib/util/fd.o 00:04:42.538 CC lib/util/fd_group.o 00:04:42.538 LIB libspdk_dma.a 00:04:42.538 CC lib/util/file.o 00:04:42.538 SO libspdk_dma.so.5.0 00:04:42.538 CC lib/util/hexlify.o 00:04:42.538 LIB libspdk_ioat.a 00:04:42.538 SYMLINK libspdk_dma.so 00:04:42.538 CC lib/util/iov.o 00:04:42.538 SO libspdk_ioat.so.7.0 00:04:42.538 CC lib/util/math.o 00:04:42.538 LIB libspdk_vfio_user.a 00:04:42.538 SYMLINK libspdk_ioat.so 00:04:42.538 CC lib/util/net.o 00:04:42.538 CC lib/util/pipe.o 00:04:42.538 SO libspdk_vfio_user.so.5.0 00:04:42.538 CC lib/util/strerror_tls.o 00:04:42.538 CC lib/util/string.o 00:04:42.538 SYMLINK libspdk_vfio_user.so 00:04:42.538 CC lib/util/uuid.o 00:04:42.538 CC lib/util/xor.o 00:04:42.538 CC lib/util/zipf.o 00:04:42.538 CC lib/util/md5.o 00:04:42.538 LIB libspdk_util.a 00:04:42.538 SO libspdk_util.so.10.1 00:04:42.538 LIB libspdk_trace_parser.a 00:04:42.538 SYMLINK libspdk_util.so 00:04:42.538 SO libspdk_trace_parser.so.6.0 00:04:42.538 SYMLINK libspdk_trace_parser.so 00:04:42.538 CC lib/conf/conf.o 00:04:42.538 CC lib/vmd/vmd.o 00:04:42.538 CC lib/idxd/idxd.o 00:04:42.538 CC lib/vmd/led.o 00:04:42.538 CC lib/rdma_utils/rdma_utils.o 00:04:42.538 CC lib/idxd/idxd_user.o 00:04:42.538 CC lib/idxd/idxd_kernel.o 00:04:42.538 CC lib/json/json_parse.o 00:04:42.538 CC lib/json/json_util.o 00:04:42.538 CC lib/env_dpdk/env.o 00:04:42.538 CC lib/env_dpdk/memory.o 00:04:42.538 CC lib/env_dpdk/pci.o 00:04:42.538 LIB libspdk_conf.a 00:04:42.538 CC lib/json/json_write.o 00:04:42.538 CC lib/env_dpdk/init.o 00:04:42.538 CC lib/env_dpdk/threads.o 00:04:42.538 SO libspdk_conf.so.6.0 00:04:42.538 LIB libspdk_rdma_utils.a 00:04:42.538 SO libspdk_rdma_utils.so.1.0 00:04:42.538 SYMLINK libspdk_conf.so 00:04:42.538 CC lib/env_dpdk/pci_ioat.o 00:04:42.538 SYMLINK libspdk_rdma_utils.so 00:04:42.538 CC lib/env_dpdk/pci_virtio.o 00:04:42.538 CC lib/env_dpdk/pci_vmd.o 00:04:42.538 CC lib/env_dpdk/pci_idxd.o 00:04:42.538 CC lib/env_dpdk/pci_event.o 00:04:42.538 LIB libspdk_json.a 00:04:42.538 CC lib/env_dpdk/sigbus_handler.o 00:04:42.538 SO libspdk_json.so.6.0 00:04:42.538 CC lib/env_dpdk/pci_dpdk.o 00:04:42.538 LIB libspdk_idxd.a 00:04:42.538 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:42.538 SYMLINK libspdk_json.so 00:04:42.538 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:42.538 SO libspdk_idxd.so.12.1 00:04:42.538 LIB libspdk_vmd.a 00:04:42.538 SO libspdk_vmd.so.6.0 00:04:42.538 SYMLINK libspdk_idxd.so 00:04:42.538 SYMLINK libspdk_vmd.so 00:04:42.538 CC lib/rdma_provider/common.o 00:04:42.538 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:42.538 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:42.538 CC lib/jsonrpc/jsonrpc_server.o 
00:04:42.538 CC lib/jsonrpc/jsonrpc_client.o 00:04:42.538 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:42.538 LIB libspdk_rdma_provider.a 00:04:42.538 SO libspdk_rdma_provider.so.7.0 00:04:42.538 SYMLINK libspdk_rdma_provider.so 00:04:42.538 LIB libspdk_jsonrpc.a 00:04:42.538 SO libspdk_jsonrpc.so.6.0 00:04:42.538 SYMLINK libspdk_jsonrpc.so 00:04:42.538 LIB libspdk_env_dpdk.a 00:04:42.538 SO libspdk_env_dpdk.so.15.1 00:04:42.538 CC lib/rpc/rpc.o 00:04:42.538 SYMLINK libspdk_env_dpdk.so 00:04:42.538 LIB libspdk_rpc.a 00:04:42.538 SO libspdk_rpc.so.6.0 00:04:42.539 SYMLINK libspdk_rpc.so 00:04:42.539 CC lib/keyring/keyring.o 00:04:42.539 CC lib/keyring/keyring_rpc.o 00:04:42.539 CC lib/notify/notify.o 00:04:42.539 CC lib/notify/notify_rpc.o 00:04:42.539 CC lib/trace/trace.o 00:04:42.539 CC lib/trace/trace_flags.o 00:04:42.539 CC lib/trace/trace_rpc.o 00:04:42.539 LIB libspdk_notify.a 00:04:42.539 SO libspdk_notify.so.6.0 00:04:42.539 LIB libspdk_keyring.a 00:04:42.539 SO libspdk_keyring.so.2.0 00:04:42.797 SYMLINK libspdk_notify.so 00:04:42.797 LIB libspdk_trace.a 00:04:42.797 SYMLINK libspdk_keyring.so 00:04:42.797 SO libspdk_trace.so.11.0 00:04:42.797 SYMLINK libspdk_trace.so 00:04:43.056 CC lib/thread/thread.o 00:04:43.056 CC lib/thread/iobuf.o 00:04:43.056 CC lib/sock/sock_rpc.o 00:04:43.056 CC lib/sock/sock.o 00:04:43.625 LIB libspdk_sock.a 00:04:43.625 SO libspdk_sock.so.10.0 00:04:43.625 SYMLINK libspdk_sock.so 00:04:43.884 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:43.884 CC lib/nvme/nvme_ctrlr.o 00:04:43.884 CC lib/nvme/nvme_fabric.o 00:04:43.884 CC lib/nvme/nvme_ns_cmd.o 00:04:43.884 CC lib/nvme/nvme_ns.o 00:04:43.884 CC lib/nvme/nvme_pcie.o 00:04:43.884 CC lib/nvme/nvme_pcie_common.o 00:04:43.884 CC lib/nvme/nvme.o 00:04:43.884 CC lib/nvme/nvme_qpair.o 00:04:44.820 LIB libspdk_thread.a 00:04:44.820 SO libspdk_thread.so.11.0 00:04:44.820 SYMLINK libspdk_thread.so 00:04:44.820 CC lib/nvme/nvme_quirks.o 00:04:44.820 CC lib/nvme/nvme_transport.o 00:04:44.820 CC lib/nvme/nvme_discovery.o 00:04:44.820 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:44.820 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:44.820 CC lib/nvme/nvme_tcp.o 00:04:45.079 CC lib/nvme/nvme_opal.o 00:04:45.079 CC lib/accel/accel.o 00:04:45.338 CC lib/blob/blobstore.o 00:04:45.338 CC lib/blob/request.o 00:04:45.338 CC lib/accel/accel_rpc.o 00:04:45.338 CC lib/accel/accel_sw.o 00:04:45.597 CC lib/nvme/nvme_io_msg.o 00:04:45.597 CC lib/blob/zeroes.o 00:04:45.597 CC lib/nvme/nvme_poll_group.o 00:04:45.597 CC lib/nvme/nvme_zns.o 00:04:45.597 CC lib/blob/blob_bs_dev.o 00:04:45.857 CC lib/init/json_config.o 00:04:45.857 CC lib/nvme/nvme_stubs.o 00:04:45.857 CC lib/virtio/virtio.o 00:04:46.117 CC lib/init/subsystem.o 00:04:46.117 CC lib/init/subsystem_rpc.o 00:04:46.117 LIB libspdk_accel.a 00:04:46.117 CC lib/nvme/nvme_auth.o 00:04:46.117 CC lib/virtio/virtio_vhost_user.o 00:04:46.117 CC lib/virtio/virtio_vfio_user.o 00:04:46.117 SO libspdk_accel.so.16.0 00:04:46.376 CC lib/init/rpc.o 00:04:46.376 CC lib/nvme/nvme_cuse.o 00:04:46.376 SYMLINK libspdk_accel.so 00:04:46.376 CC lib/nvme/nvme_rdma.o 00:04:46.376 CC lib/virtio/virtio_pci.o 00:04:46.376 LIB libspdk_init.a 00:04:46.376 CC lib/fsdev/fsdev.o 00:04:46.376 CC lib/bdev/bdev.o 00:04:46.376 SO libspdk_init.so.6.0 00:04:46.376 CC lib/bdev/bdev_rpc.o 00:04:46.635 CC lib/bdev/bdev_zone.o 00:04:46.635 SYMLINK libspdk_init.so 00:04:46.635 CC lib/bdev/part.o 00:04:46.635 CC lib/bdev/scsi_nvme.o 00:04:46.635 LIB libspdk_virtio.a 00:04:46.894 SO libspdk_virtio.so.7.0 00:04:46.894 CC lib/fsdev/fsdev_io.o 
00:04:46.894 CC lib/fsdev/fsdev_rpc.o 00:04:46.894 SYMLINK libspdk_virtio.so 00:04:46.894 CC lib/event/app.o 00:04:46.894 CC lib/event/reactor.o 00:04:46.894 CC lib/event/log_rpc.o 00:04:47.154 CC lib/event/app_rpc.o 00:04:47.154 CC lib/event/scheduler_static.o 00:04:47.154 LIB libspdk_fsdev.a 00:04:47.154 SO libspdk_fsdev.so.2.0 00:04:47.154 SYMLINK libspdk_fsdev.so 00:04:47.413 LIB libspdk_event.a 00:04:47.413 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:47.413 SO libspdk_event.so.14.0 00:04:47.672 SYMLINK libspdk_event.so 00:04:47.672 LIB libspdk_nvme.a 00:04:47.932 SO libspdk_nvme.so.15.0 00:04:48.192 LIB libspdk_fuse_dispatcher.a 00:04:48.192 SO libspdk_fuse_dispatcher.so.1.0 00:04:48.192 SYMLINK libspdk_fuse_dispatcher.so 00:04:48.192 SYMLINK libspdk_nvme.so 00:04:48.192 LIB libspdk_blob.a 00:04:48.452 SO libspdk_blob.so.12.0 00:04:48.452 SYMLINK libspdk_blob.so 00:04:48.712 CC lib/lvol/lvol.o 00:04:48.712 CC lib/blobfs/blobfs.o 00:04:48.712 CC lib/blobfs/tree.o 00:04:49.298 LIB libspdk_bdev.a 00:04:49.298 SO libspdk_bdev.so.17.0 00:04:49.298 SYMLINK libspdk_bdev.so 00:04:49.557 LIB libspdk_blobfs.a 00:04:49.557 CC lib/scsi/dev.o 00:04:49.557 CC lib/scsi/lun.o 00:04:49.557 CC lib/scsi/port.o 00:04:49.557 CC lib/scsi/scsi.o 00:04:49.557 CC lib/ftl/ftl_core.o 00:04:49.557 CC lib/ublk/ublk.o 00:04:49.557 CC lib/nbd/nbd.o 00:04:49.557 CC lib/nvmf/ctrlr.o 00:04:49.557 SO libspdk_blobfs.so.11.0 00:04:49.557 LIB libspdk_lvol.a 00:04:49.816 SO libspdk_lvol.so.11.0 00:04:49.816 SYMLINK libspdk_blobfs.so 00:04:49.816 CC lib/nbd/nbd_rpc.o 00:04:49.816 SYMLINK libspdk_lvol.so 00:04:49.816 CC lib/scsi/scsi_bdev.o 00:04:49.816 CC lib/scsi/scsi_pr.o 00:04:49.816 CC lib/scsi/scsi_rpc.o 00:04:49.816 CC lib/scsi/task.o 00:04:49.816 CC lib/ftl/ftl_init.o 00:04:50.074 CC lib/nvmf/ctrlr_discovery.o 00:04:50.074 CC lib/ublk/ublk_rpc.o 00:04:50.074 CC lib/ftl/ftl_layout.o 00:04:50.074 LIB libspdk_nbd.a 00:04:50.074 SO libspdk_nbd.so.7.0 00:04:50.074 CC lib/ftl/ftl_debug.o 00:04:50.074 CC lib/nvmf/ctrlr_bdev.o 00:04:50.074 SYMLINK libspdk_nbd.so 00:04:50.074 CC lib/ftl/ftl_io.o 00:04:50.074 CC lib/nvmf/subsystem.o 00:04:50.074 CC lib/ftl/ftl_sb.o 00:04:50.332 LIB libspdk_scsi.a 00:04:50.332 LIB libspdk_ublk.a 00:04:50.332 SO libspdk_ublk.so.3.0 00:04:50.332 CC lib/ftl/ftl_l2p.o 00:04:50.332 SO libspdk_scsi.so.9.0 00:04:50.332 CC lib/nvmf/nvmf.o 00:04:50.332 SYMLINK libspdk_ublk.so 00:04:50.332 CC lib/nvmf/nvmf_rpc.o 00:04:50.332 CC lib/nvmf/transport.o 00:04:50.332 CC lib/ftl/ftl_l2p_flat.o 00:04:50.332 SYMLINK libspdk_scsi.so 00:04:50.332 CC lib/ftl/ftl_nv_cache.o 00:04:50.590 CC lib/ftl/ftl_band.o 00:04:50.590 CC lib/ftl/ftl_band_ops.o 00:04:50.590 CC lib/ftl/ftl_writer.o 00:04:50.878 CC lib/ftl/ftl_rq.o 00:04:50.878 CC lib/ftl/ftl_reloc.o 00:04:50.878 CC lib/ftl/ftl_l2p_cache.o 00:04:50.878 CC lib/ftl/ftl_p2l.o 00:04:50.878 CC lib/ftl/ftl_p2l_log.o 00:04:51.137 CC lib/ftl/mngt/ftl_mngt.o 00:04:51.137 CC lib/nvmf/tcp.o 00:04:51.137 CC lib/nvmf/stubs.o 00:04:51.137 CC lib/nvmf/mdns_server.o 00:04:51.137 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:51.395 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:51.395 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:51.395 CC lib/nvmf/rdma.o 00:04:51.395 CC lib/nvmf/auth.o 00:04:51.395 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:51.395 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:51.395 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:51.654 CC lib/iscsi/conn.o 00:04:51.654 CC lib/iscsi/init_grp.o 00:04:51.654 CC lib/iscsi/iscsi.o 00:04:51.654 CC lib/vhost/vhost.o 00:04:51.654 CC lib/iscsi/param.o 00:04:51.654 
CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:51.654 CC lib/vhost/vhost_rpc.o 00:04:51.913 CC lib/iscsi/portal_grp.o 00:04:51.913 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:52.170 CC lib/iscsi/tgt_node.o 00:04:52.170 CC lib/iscsi/iscsi_subsystem.o 00:04:52.170 CC lib/vhost/vhost_scsi.o 00:04:52.170 CC lib/vhost/vhost_blk.o 00:04:52.170 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:52.428 CC lib/vhost/rte_vhost_user.o 00:04:52.428 CC lib/iscsi/iscsi_rpc.o 00:04:52.428 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:52.687 CC lib/iscsi/task.o 00:04:52.687 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:52.687 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:52.687 CC lib/ftl/utils/ftl_conf.o 00:04:52.687 CC lib/ftl/utils/ftl_md.o 00:04:52.946 CC lib/ftl/utils/ftl_mempool.o 00:04:52.946 CC lib/ftl/utils/ftl_bitmap.o 00:04:52.946 CC lib/ftl/utils/ftl_property.o 00:04:52.946 LIB libspdk_iscsi.a 00:04:52.946 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:52.946 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:53.205 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:53.205 SO libspdk_iscsi.so.8.0 00:04:53.205 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:53.205 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:53.205 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:53.205 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:53.205 SYMLINK libspdk_iscsi.so 00:04:53.205 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:53.205 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:53.205 LIB libspdk_nvmf.a 00:04:53.205 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:53.464 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:53.464 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:53.464 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:53.464 SO libspdk_nvmf.so.20.0 00:04:53.464 CC lib/ftl/base/ftl_base_dev.o 00:04:53.464 CC lib/ftl/base/ftl_base_bdev.o 00:04:53.464 LIB libspdk_vhost.a 00:04:53.464 CC lib/ftl/ftl_trace.o 00:04:53.464 SO libspdk_vhost.so.8.0 00:04:53.723 SYMLINK libspdk_nvmf.so 00:04:53.723 SYMLINK libspdk_vhost.so 00:04:53.723 LIB libspdk_ftl.a 00:04:53.982 SO libspdk_ftl.so.9.0 00:04:54.241 SYMLINK libspdk_ftl.so 00:04:54.500 CC module/env_dpdk/env_dpdk_rpc.o 00:04:54.759 CC module/accel/ioat/accel_ioat.o 00:04:54.759 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:54.759 CC module/sock/posix/posix.o 00:04:54.759 CC module/blob/bdev/blob_bdev.o 00:04:54.759 CC module/scheduler/gscheduler/gscheduler.o 00:04:54.759 CC module/accel/error/accel_error.o 00:04:54.759 CC module/keyring/file/keyring.o 00:04:54.759 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:54.759 CC module/fsdev/aio/fsdev_aio.o 00:04:54.759 LIB libspdk_env_dpdk_rpc.a 00:04:54.759 SO libspdk_env_dpdk_rpc.so.6.0 00:04:54.759 SYMLINK libspdk_env_dpdk_rpc.so 00:04:54.759 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:54.759 LIB libspdk_scheduler_gscheduler.a 00:04:54.759 CC module/keyring/file/keyring_rpc.o 00:04:54.759 LIB libspdk_scheduler_dpdk_governor.a 00:04:54.759 CC module/accel/ioat/accel_ioat_rpc.o 00:04:54.759 SO libspdk_scheduler_gscheduler.so.4.0 00:04:54.759 LIB libspdk_scheduler_dynamic.a 00:04:55.018 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:55.018 CC module/accel/error/accel_error_rpc.o 00:04:55.018 SO libspdk_scheduler_dynamic.so.4.0 00:04:55.018 SYMLINK libspdk_scheduler_gscheduler.so 00:04:55.018 CC module/fsdev/aio/linux_aio_mgr.o 00:04:55.018 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:55.018 LIB libspdk_blob_bdev.a 00:04:55.018 SYMLINK libspdk_scheduler_dynamic.so 00:04:55.018 LIB libspdk_keyring_file.a 00:04:55.018 SO libspdk_blob_bdev.so.12.0 00:04:55.018 LIB libspdk_accel_ioat.a 00:04:55.018 SO 
libspdk_keyring_file.so.2.0 00:04:55.018 LIB libspdk_accel_error.a 00:04:55.018 SO libspdk_accel_ioat.so.6.0 00:04:55.018 SYMLINK libspdk_blob_bdev.so 00:04:55.018 SO libspdk_accel_error.so.2.0 00:04:55.018 SYMLINK libspdk_keyring_file.so 00:04:55.018 SYMLINK libspdk_accel_ioat.so 00:04:55.018 SYMLINK libspdk_accel_error.so 00:04:55.018 CC module/accel/dsa/accel_dsa.o 00:04:55.018 CC module/accel/dsa/accel_dsa_rpc.o 00:04:55.277 CC module/accel/iaa/accel_iaa.o 00:04:55.277 CC module/keyring/linux/keyring.o 00:04:55.277 CC module/accel/iaa/accel_iaa_rpc.o 00:04:55.277 CC module/sock/uring/uring.o 00:04:55.277 CC module/keyring/linux/keyring_rpc.o 00:04:55.277 CC module/bdev/delay/vbdev_delay.o 00:04:55.277 LIB libspdk_fsdev_aio.a 00:04:55.277 LIB libspdk_accel_iaa.a 00:04:55.277 SO libspdk_fsdev_aio.so.1.0 00:04:55.535 CC module/blobfs/bdev/blobfs_bdev.o 00:04:55.535 SO libspdk_accel_iaa.so.3.0 00:04:55.535 LIB libspdk_sock_posix.a 00:04:55.535 LIB libspdk_accel_dsa.a 00:04:55.536 LIB libspdk_keyring_linux.a 00:04:55.536 SO libspdk_sock_posix.so.6.0 00:04:55.536 SO libspdk_accel_dsa.so.5.0 00:04:55.536 SYMLINK libspdk_fsdev_aio.so 00:04:55.536 SYMLINK libspdk_accel_iaa.so 00:04:55.536 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:55.536 CC module/bdev/error/vbdev_error.o 00:04:55.536 CC module/bdev/gpt/gpt.o 00:04:55.536 SO libspdk_keyring_linux.so.1.0 00:04:55.536 SYMLINK libspdk_sock_posix.so 00:04:55.536 SYMLINK libspdk_accel_dsa.so 00:04:55.536 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:55.536 SYMLINK libspdk_keyring_linux.so 00:04:55.536 CC module/bdev/error/vbdev_error_rpc.o 00:04:55.794 LIB libspdk_blobfs_bdev.a 00:04:55.794 CC module/bdev/gpt/vbdev_gpt.o 00:04:55.794 CC module/bdev/lvol/vbdev_lvol.o 00:04:55.794 SO libspdk_blobfs_bdev.so.6.0 00:04:55.794 CC module/bdev/null/bdev_null.o 00:04:55.794 CC module/bdev/malloc/bdev_malloc.o 00:04:55.794 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:55.794 LIB libspdk_bdev_error.a 00:04:55.794 LIB libspdk_bdev_delay.a 00:04:55.794 SYMLINK libspdk_blobfs_bdev.so 00:04:55.794 SO libspdk_bdev_error.so.6.0 00:04:55.794 SO libspdk_bdev_delay.so.6.0 00:04:55.794 CC module/bdev/nvme/bdev_nvme.o 00:04:55.794 SYMLINK libspdk_bdev_error.so 00:04:55.794 SYMLINK libspdk_bdev_delay.so 00:04:56.052 CC module/bdev/passthru/vbdev_passthru.o 00:04:56.052 LIB libspdk_sock_uring.a 00:04:56.052 LIB libspdk_bdev_gpt.a 00:04:56.052 SO libspdk_sock_uring.so.5.0 00:04:56.052 CC module/bdev/null/bdev_null_rpc.o 00:04:56.052 SO libspdk_bdev_gpt.so.6.0 00:04:56.052 CC module/bdev/raid/bdev_raid.o 00:04:56.052 CC module/bdev/split/vbdev_split.o 00:04:56.052 SYMLINK libspdk_sock_uring.so 00:04:56.052 SYMLINK libspdk_bdev_gpt.so 00:04:56.052 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:56.052 CC module/bdev/raid/bdev_raid_rpc.o 00:04:56.052 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:56.052 CC module/bdev/raid/bdev_raid_sb.o 00:04:56.310 LIB libspdk_bdev_null.a 00:04:56.310 SO libspdk_bdev_null.so.6.0 00:04:56.310 LIB libspdk_bdev_lvol.a 00:04:56.310 SO libspdk_bdev_lvol.so.6.0 00:04:56.310 SYMLINK libspdk_bdev_null.so 00:04:56.310 LIB libspdk_bdev_malloc.a 00:04:56.310 CC module/bdev/raid/raid0.o 00:04:56.310 SO libspdk_bdev_malloc.so.6.0 00:04:56.310 SYMLINK libspdk_bdev_lvol.so 00:04:56.310 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:56.310 CC module/bdev/split/vbdev_split_rpc.o 00:04:56.310 SYMLINK libspdk_bdev_malloc.so 00:04:56.568 CC module/bdev/raid/raid1.o 00:04:56.568 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:56.568 CC 
module/bdev/uring/bdev_uring.o 00:04:56.568 LIB libspdk_bdev_passthru.a 00:04:56.568 SO libspdk_bdev_passthru.so.6.0 00:04:56.568 LIB libspdk_bdev_split.a 00:04:56.568 CC module/bdev/aio/bdev_aio.o 00:04:56.568 SO libspdk_bdev_split.so.6.0 00:04:56.568 SYMLINK libspdk_bdev_passthru.so 00:04:56.568 CC module/bdev/aio/bdev_aio_rpc.o 00:04:56.568 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:56.568 SYMLINK libspdk_bdev_split.so 00:04:56.568 CC module/bdev/nvme/nvme_rpc.o 00:04:56.827 CC module/bdev/uring/bdev_uring_rpc.o 00:04:56.827 CC module/bdev/raid/concat.o 00:04:56.827 CC module/bdev/nvme/bdev_mdns_client.o 00:04:56.827 CC module/bdev/nvme/vbdev_opal.o 00:04:56.827 LIB libspdk_bdev_zone_block.a 00:04:56.827 SO libspdk_bdev_zone_block.so.6.0 00:04:56.827 LIB libspdk_bdev_aio.a 00:04:56.827 LIB libspdk_bdev_uring.a 00:04:56.827 SYMLINK libspdk_bdev_zone_block.so 00:04:57.085 SO libspdk_bdev_aio.so.6.0 00:04:57.085 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:57.085 SO libspdk_bdev_uring.so.6.0 00:04:57.085 SYMLINK libspdk_bdev_aio.so 00:04:57.086 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:57.086 SYMLINK libspdk_bdev_uring.so 00:04:57.086 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:57.086 CC module/bdev/ftl/bdev_ftl.o 00:04:57.086 LIB libspdk_bdev_raid.a 00:04:57.086 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:57.086 CC module/bdev/iscsi/bdev_iscsi.o 00:04:57.086 SO libspdk_bdev_raid.so.6.0 00:04:57.086 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:57.344 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:57.344 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:57.344 SYMLINK libspdk_bdev_raid.so 00:04:57.344 LIB libspdk_bdev_ftl.a 00:04:57.344 SO libspdk_bdev_ftl.so.6.0 00:04:57.603 LIB libspdk_bdev_iscsi.a 00:04:57.603 SO libspdk_bdev_iscsi.so.6.0 00:04:57.603 SYMLINK libspdk_bdev_ftl.so 00:04:57.603 SYMLINK libspdk_bdev_iscsi.so 00:04:57.862 LIB libspdk_bdev_virtio.a 00:04:57.862 SO libspdk_bdev_virtio.so.6.0 00:04:58.120 SYMLINK libspdk_bdev_virtio.so 00:04:58.688 LIB libspdk_bdev_nvme.a 00:04:58.688 SO libspdk_bdev_nvme.so.7.1 00:04:58.948 SYMLINK libspdk_bdev_nvme.so 00:04:59.516 CC module/event/subsystems/fsdev/fsdev.o 00:04:59.517 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:59.517 CC module/event/subsystems/iobuf/iobuf.o 00:04:59.517 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:59.517 CC module/event/subsystems/scheduler/scheduler.o 00:04:59.517 CC module/event/subsystems/vmd/vmd.o 00:04:59.517 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:59.517 CC module/event/subsystems/sock/sock.o 00:04:59.517 CC module/event/subsystems/keyring/keyring.o 00:04:59.517 LIB libspdk_event_scheduler.a 00:04:59.517 LIB libspdk_event_vhost_blk.a 00:04:59.517 LIB libspdk_event_sock.a 00:04:59.517 LIB libspdk_event_fsdev.a 00:04:59.517 LIB libspdk_event_iobuf.a 00:04:59.517 SO libspdk_event_scheduler.so.4.0 00:04:59.517 LIB libspdk_event_keyring.a 00:04:59.517 LIB libspdk_event_vmd.a 00:04:59.517 SO libspdk_event_vhost_blk.so.3.0 00:04:59.517 SO libspdk_event_sock.so.5.0 00:04:59.517 SO libspdk_event_fsdev.so.1.0 00:04:59.517 SO libspdk_event_iobuf.so.3.0 00:04:59.775 SO libspdk_event_keyring.so.1.0 00:04:59.775 SO libspdk_event_vmd.so.6.0 00:04:59.775 SYMLINK libspdk_event_scheduler.so 00:04:59.775 SYMLINK libspdk_event_vhost_blk.so 00:04:59.776 SYMLINK libspdk_event_sock.so 00:04:59.776 SYMLINK libspdk_event_keyring.so 00:04:59.776 SYMLINK libspdk_event_fsdev.so 00:04:59.776 SYMLINK libspdk_event_vmd.so 00:04:59.776 SYMLINK libspdk_event_iobuf.so 00:05:00.042 CC 
module/event/subsystems/accel/accel.o 00:05:00.330 LIB libspdk_event_accel.a 00:05:00.330 SO libspdk_event_accel.so.6.0 00:05:00.330 SYMLINK libspdk_event_accel.so 00:05:00.596 CC module/event/subsystems/bdev/bdev.o 00:05:00.855 LIB libspdk_event_bdev.a 00:05:00.855 SO libspdk_event_bdev.so.6.0 00:05:00.855 SYMLINK libspdk_event_bdev.so 00:05:01.114 CC module/event/subsystems/scsi/scsi.o 00:05:01.114 CC module/event/subsystems/nbd/nbd.o 00:05:01.114 CC module/event/subsystems/ublk/ublk.o 00:05:01.114 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:05:01.114 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:05:01.372 LIB libspdk_event_nbd.a 00:05:01.372 LIB libspdk_event_scsi.a 00:05:01.372 SO libspdk_event_nbd.so.6.0 00:05:01.372 LIB libspdk_event_ublk.a 00:05:01.372 SO libspdk_event_scsi.so.6.0 00:05:01.372 SO libspdk_event_ublk.so.3.0 00:05:01.372 SYMLINK libspdk_event_nbd.so 00:05:01.372 SYMLINK libspdk_event_scsi.so 00:05:01.372 LIB libspdk_event_nvmf.a 00:05:01.372 SYMLINK libspdk_event_ublk.so 00:05:01.631 SO libspdk_event_nvmf.so.6.0 00:05:01.631 SYMLINK libspdk_event_nvmf.so 00:05:01.631 CC module/event/subsystems/iscsi/iscsi.o 00:05:01.631 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:05:01.890 LIB libspdk_event_vhost_scsi.a 00:05:01.890 LIB libspdk_event_iscsi.a 00:05:01.890 SO libspdk_event_vhost_scsi.so.3.0 00:05:01.890 SO libspdk_event_iscsi.so.6.0 00:05:01.890 SYMLINK libspdk_event_vhost_scsi.so 00:05:01.890 SYMLINK libspdk_event_iscsi.so 00:05:02.149 SO libspdk.so.6.0 00:05:02.149 SYMLINK libspdk.so 00:05:02.408 CC app/trace_record/trace_record.o 00:05:02.408 CC app/spdk_lspci/spdk_lspci.o 00:05:02.408 CXX app/trace/trace.o 00:05:02.408 CC app/spdk_nvme_identify/identify.o 00:05:02.408 CC app/spdk_nvme_perf/perf.o 00:05:02.408 CC app/iscsi_tgt/iscsi_tgt.o 00:05:02.408 CC app/nvmf_tgt/nvmf_main.o 00:05:02.408 CC app/spdk_tgt/spdk_tgt.o 00:05:02.408 CC examples/util/zipf/zipf.o 00:05:02.408 CC test/thread/poller_perf/poller_perf.o 00:05:02.668 LINK spdk_lspci 00:05:02.668 LINK nvmf_tgt 00:05:02.668 LINK zipf 00:05:02.668 LINK poller_perf 00:05:02.668 LINK iscsi_tgt 00:05:02.668 LINK spdk_trace_record 00:05:02.668 LINK spdk_tgt 00:05:02.928 CC app/spdk_nvme_discover/discovery_aer.o 00:05:02.928 LINK spdk_trace 00:05:02.928 TEST_HEADER include/spdk/accel.h 00:05:02.928 TEST_HEADER include/spdk/accel_module.h 00:05:02.928 TEST_HEADER include/spdk/assert.h 00:05:03.188 LINK spdk_nvme_discover 00:05:03.188 CC app/spdk_top/spdk_top.o 00:05:03.188 TEST_HEADER include/spdk/barrier.h 00:05:03.188 TEST_HEADER include/spdk/base64.h 00:05:03.188 TEST_HEADER include/spdk/bdev.h 00:05:03.188 TEST_HEADER include/spdk/bdev_module.h 00:05:03.188 TEST_HEADER include/spdk/bdev_zone.h 00:05:03.188 TEST_HEADER include/spdk/bit_array.h 00:05:03.188 TEST_HEADER include/spdk/bit_pool.h 00:05:03.188 TEST_HEADER include/spdk/blob_bdev.h 00:05:03.188 TEST_HEADER include/spdk/blobfs_bdev.h 00:05:03.188 TEST_HEADER include/spdk/blobfs.h 00:05:03.188 TEST_HEADER include/spdk/blob.h 00:05:03.188 TEST_HEADER include/spdk/conf.h 00:05:03.188 TEST_HEADER include/spdk/config.h 00:05:03.188 CC examples/ioat/perf/perf.o 00:05:03.188 TEST_HEADER include/spdk/cpuset.h 00:05:03.188 TEST_HEADER include/spdk/crc16.h 00:05:03.188 CC examples/vmd/lsvmd/lsvmd.o 00:05:03.188 TEST_HEADER include/spdk/crc32.h 00:05:03.188 TEST_HEADER include/spdk/crc64.h 00:05:03.188 TEST_HEADER include/spdk/dif.h 00:05:03.188 TEST_HEADER include/spdk/dma.h 00:05:03.188 TEST_HEADER include/spdk/endian.h 00:05:03.188 TEST_HEADER 
include/spdk/env_dpdk.h 00:05:03.188 TEST_HEADER include/spdk/env.h 00:05:03.188 TEST_HEADER include/spdk/event.h 00:05:03.188 TEST_HEADER include/spdk/fd_group.h 00:05:03.188 TEST_HEADER include/spdk/fd.h 00:05:03.188 TEST_HEADER include/spdk/file.h 00:05:03.188 TEST_HEADER include/spdk/fsdev.h 00:05:03.188 TEST_HEADER include/spdk/fsdev_module.h 00:05:03.188 TEST_HEADER include/spdk/ftl.h 00:05:03.188 TEST_HEADER include/spdk/fuse_dispatcher.h 00:05:03.188 TEST_HEADER include/spdk/gpt_spec.h 00:05:03.188 TEST_HEADER include/spdk/hexlify.h 00:05:03.188 TEST_HEADER include/spdk/histogram_data.h 00:05:03.188 CC examples/vmd/led/led.o 00:05:03.188 TEST_HEADER include/spdk/idxd.h 00:05:03.188 TEST_HEADER include/spdk/idxd_spec.h 00:05:03.188 TEST_HEADER include/spdk/init.h 00:05:03.188 TEST_HEADER include/spdk/ioat.h 00:05:03.188 TEST_HEADER include/spdk/ioat_spec.h 00:05:03.188 CC test/dma/test_dma/test_dma.o 00:05:03.188 TEST_HEADER include/spdk/iscsi_spec.h 00:05:03.188 TEST_HEADER include/spdk/json.h 00:05:03.188 TEST_HEADER include/spdk/jsonrpc.h 00:05:03.188 CC test/app/bdev_svc/bdev_svc.o 00:05:03.188 TEST_HEADER include/spdk/keyring.h 00:05:03.188 TEST_HEADER include/spdk/keyring_module.h 00:05:03.188 TEST_HEADER include/spdk/likely.h 00:05:03.188 TEST_HEADER include/spdk/log.h 00:05:03.188 TEST_HEADER include/spdk/lvol.h 00:05:03.188 TEST_HEADER include/spdk/md5.h 00:05:03.188 TEST_HEADER include/spdk/memory.h 00:05:03.188 TEST_HEADER include/spdk/mmio.h 00:05:03.188 TEST_HEADER include/spdk/nbd.h 00:05:03.188 TEST_HEADER include/spdk/net.h 00:05:03.188 TEST_HEADER include/spdk/notify.h 00:05:03.188 TEST_HEADER include/spdk/nvme.h 00:05:03.188 TEST_HEADER include/spdk/nvme_intel.h 00:05:03.188 TEST_HEADER include/spdk/nvme_ocssd.h 00:05:03.188 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:05:03.188 TEST_HEADER include/spdk/nvme_spec.h 00:05:03.188 TEST_HEADER include/spdk/nvme_zns.h 00:05:03.188 TEST_HEADER include/spdk/nvmf_cmd.h 00:05:03.188 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:05:03.188 TEST_HEADER include/spdk/nvmf.h 00:05:03.188 TEST_HEADER include/spdk/nvmf_spec.h 00:05:03.188 TEST_HEADER include/spdk/nvmf_transport.h 00:05:03.188 TEST_HEADER include/spdk/opal.h 00:05:03.188 TEST_HEADER include/spdk/opal_spec.h 00:05:03.188 TEST_HEADER include/spdk/pci_ids.h 00:05:03.188 TEST_HEADER include/spdk/pipe.h 00:05:03.188 TEST_HEADER include/spdk/queue.h 00:05:03.188 LINK spdk_nvme_identify 00:05:03.188 TEST_HEADER include/spdk/reduce.h 00:05:03.188 TEST_HEADER include/spdk/rpc.h 00:05:03.188 TEST_HEADER include/spdk/scheduler.h 00:05:03.188 LINK lsvmd 00:05:03.188 TEST_HEADER include/spdk/scsi.h 00:05:03.188 TEST_HEADER include/spdk/scsi_spec.h 00:05:03.188 TEST_HEADER include/spdk/sock.h 00:05:03.188 TEST_HEADER include/spdk/stdinc.h 00:05:03.188 TEST_HEADER include/spdk/string.h 00:05:03.188 TEST_HEADER include/spdk/thread.h 00:05:03.188 TEST_HEADER include/spdk/trace.h 00:05:03.188 TEST_HEADER include/spdk/trace_parser.h 00:05:03.188 TEST_HEADER include/spdk/tree.h 00:05:03.188 TEST_HEADER include/spdk/ublk.h 00:05:03.188 TEST_HEADER include/spdk/util.h 00:05:03.188 TEST_HEADER include/spdk/uuid.h 00:05:03.188 TEST_HEADER include/spdk/version.h 00:05:03.188 LINK led 00:05:03.188 TEST_HEADER include/spdk/vfio_user_pci.h 00:05:03.448 TEST_HEADER include/spdk/vfio_user_spec.h 00:05:03.448 TEST_HEADER include/spdk/vhost.h 00:05:03.448 TEST_HEADER include/spdk/vmd.h 00:05:03.448 TEST_HEADER include/spdk/xor.h 00:05:03.448 TEST_HEADER include/spdk/zipf.h 00:05:03.448 CXX 
test/cpp_headers/accel.o 00:05:03.448 LINK spdk_nvme_perf 00:05:03.448 LINK bdev_svc 00:05:03.448 LINK ioat_perf 00:05:03.448 CXX test/cpp_headers/accel_module.o 00:05:03.448 CC examples/idxd/perf/perf.o 00:05:03.448 CXX test/cpp_headers/assert.o 00:05:03.448 CC examples/interrupt_tgt/interrupt_tgt.o 00:05:03.707 CC examples/ioat/verify/verify.o 00:05:03.707 LINK test_dma 00:05:03.707 CC examples/thread/thread/thread_ex.o 00:05:03.707 CXX test/cpp_headers/barrier.o 00:05:03.707 CC examples/sock/hello_world/hello_sock.o 00:05:03.707 CC test/app/histogram_perf/histogram_perf.o 00:05:03.707 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:05:03.707 LINK interrupt_tgt 00:05:03.967 LINK idxd_perf 00:05:03.967 LINK verify 00:05:03.967 LINK spdk_top 00:05:03.967 LINK histogram_perf 00:05:03.967 CXX test/cpp_headers/base64.o 00:05:03.967 LINK thread 00:05:03.967 CXX test/cpp_headers/bdev.o 00:05:03.967 LINK hello_sock 00:05:03.967 CC test/app/jsoncat/jsoncat.o 00:05:03.967 CC test/app/stub/stub.o 00:05:04.226 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:05:04.226 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:05:04.226 LINK nvme_fuzz 00:05:04.226 CC app/spdk_dd/spdk_dd.o 00:05:04.226 CXX test/cpp_headers/bdev_module.o 00:05:04.226 LINK jsoncat 00:05:04.226 LINK stub 00:05:04.226 CC app/fio/nvme/fio_plugin.o 00:05:04.226 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:05:04.485 CC app/fio/bdev/fio_plugin.o 00:05:04.485 CC examples/nvme/hello_world/hello_world.o 00:05:04.485 CXX test/cpp_headers/bdev_zone.o 00:05:04.485 CC app/vhost/vhost.o 00:05:04.485 CC examples/accel/perf/accel_perf.o 00:05:04.485 CXX test/cpp_headers/bit_array.o 00:05:04.743 LINK hello_world 00:05:04.743 LINK spdk_dd 00:05:04.743 CC test/env/mem_callbacks/mem_callbacks.o 00:05:04.743 LINK vhost_fuzz 00:05:04.743 LINK vhost 00:05:04.743 CXX test/cpp_headers/bit_pool.o 00:05:05.003 CC examples/nvme/reconnect/reconnect.o 00:05:05.003 LINK spdk_bdev 00:05:05.003 CC test/env/vtophys/vtophys.o 00:05:05.003 LINK spdk_nvme 00:05:05.003 CXX test/cpp_headers/blob_bdev.o 00:05:05.003 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:05:05.003 CC test/env/memory/memory_ut.o 00:05:05.003 CXX test/cpp_headers/blobfs_bdev.o 00:05:05.003 LINK accel_perf 00:05:05.003 CXX test/cpp_headers/blobfs.o 00:05:05.003 LINK vtophys 00:05:05.263 LINK env_dpdk_post_init 00:05:05.263 CXX test/cpp_headers/blob.o 00:05:05.263 LINK reconnect 00:05:05.263 CXX test/cpp_headers/conf.o 00:05:05.264 CC test/rpc_client/rpc_client_test.o 00:05:05.264 LINK mem_callbacks 00:05:05.264 CC test/event/event_perf/event_perf.o 00:05:05.264 CC test/nvme/aer/aer.o 00:05:05.522 CXX test/cpp_headers/config.o 00:05:05.522 CXX test/cpp_headers/cpuset.o 00:05:05.522 LINK event_perf 00:05:05.522 CC examples/nvme/nvme_manage/nvme_manage.o 00:05:05.522 LINK rpc_client_test 00:05:05.522 CC test/accel/dif/dif.o 00:05:05.782 CXX test/cpp_headers/crc16.o 00:05:05.782 CC test/blobfs/mkfs/mkfs.o 00:05:05.782 LINK aer 00:05:05.782 CC test/lvol/esnap/esnap.o 00:05:05.782 CC test/event/reactor/reactor.o 00:05:05.782 CC test/event/reactor_perf/reactor_perf.o 00:05:05.782 LINK iscsi_fuzz 00:05:05.782 CXX test/cpp_headers/crc32.o 00:05:05.782 LINK mkfs 00:05:06.042 LINK reactor 00:05:06.042 LINK reactor_perf 00:05:06.042 CC test/nvme/reset/reset.o 00:05:06.042 LINK nvme_manage 00:05:06.042 CXX test/cpp_headers/crc64.o 00:05:06.042 CXX test/cpp_headers/dif.o 00:05:06.042 CXX test/cpp_headers/dma.o 00:05:06.042 CC test/event/app_repeat/app_repeat.o 00:05:06.301 LINK memory_ut 00:05:06.301 CC 
test/event/scheduler/scheduler.o 00:05:06.301 LINK reset 00:05:06.301 CXX test/cpp_headers/endian.o 00:05:06.301 LINK dif 00:05:06.301 LINK app_repeat 00:05:06.301 CC examples/nvme/arbitration/arbitration.o 00:05:06.301 CC examples/nvme/hotplug/hotplug.o 00:05:06.301 CC test/env/pci/pci_ut.o 00:05:06.560 CXX test/cpp_headers/env_dpdk.o 00:05:06.560 CC test/nvme/sgl/sgl.o 00:05:06.560 CXX test/cpp_headers/env.o 00:05:06.560 LINK scheduler 00:05:06.560 CC examples/nvme/cmb_copy/cmb_copy.o 00:05:06.560 CXX test/cpp_headers/event.o 00:05:06.560 LINK hotplug 00:05:06.560 CXX test/cpp_headers/fd_group.o 00:05:06.819 LINK cmb_copy 00:05:06.819 LINK arbitration 00:05:06.819 CC test/nvme/e2edp/nvme_dp.o 00:05:06.819 CC test/nvme/overhead/overhead.o 00:05:06.819 LINK sgl 00:05:06.819 CC test/nvme/err_injection/err_injection.o 00:05:06.819 LINK pci_ut 00:05:06.819 CXX test/cpp_headers/fd.o 00:05:06.819 CC test/nvme/startup/startup.o 00:05:07.078 CC examples/nvme/abort/abort.o 00:05:07.078 CC test/nvme/reserve/reserve.o 00:05:07.078 LINK err_injection 00:05:07.078 LINK nvme_dp 00:05:07.078 CC test/nvme/simple_copy/simple_copy.o 00:05:07.078 LINK overhead 00:05:07.078 CXX test/cpp_headers/file.o 00:05:07.078 LINK startup 00:05:07.078 CC test/nvme/connect_stress/connect_stress.o 00:05:07.078 CXX test/cpp_headers/fsdev.o 00:05:07.337 CXX test/cpp_headers/fsdev_module.o 00:05:07.337 LINK reserve 00:05:07.337 LINK simple_copy 00:05:07.337 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:05:07.337 CC test/nvme/boot_partition/boot_partition.o 00:05:07.337 LINK connect_stress 00:05:07.337 CC test/bdev/bdevio/bdevio.o 00:05:07.337 CXX test/cpp_headers/ftl.o 00:05:07.337 LINK abort 00:05:07.337 CXX test/cpp_headers/fuse_dispatcher.o 00:05:07.337 CXX test/cpp_headers/gpt_spec.o 00:05:07.596 LINK pmr_persistence 00:05:07.596 CXX test/cpp_headers/hexlify.o 00:05:07.596 CC examples/blob/hello_world/hello_blob.o 00:05:07.596 LINK boot_partition 00:05:07.596 CXX test/cpp_headers/histogram_data.o 00:05:07.596 CXX test/cpp_headers/idxd.o 00:05:07.596 CXX test/cpp_headers/idxd_spec.o 00:05:07.596 CC examples/blob/cli/blobcli.o 00:05:07.864 CXX test/cpp_headers/init.o 00:05:07.864 LINK bdevio 00:05:07.864 LINK hello_blob 00:05:07.864 CXX test/cpp_headers/ioat.o 00:05:07.864 CC test/nvme/compliance/nvme_compliance.o 00:05:07.864 CC examples/fsdev/hello_world/hello_fsdev.o 00:05:07.864 CXX test/cpp_headers/ioat_spec.o 00:05:07.864 CXX test/cpp_headers/iscsi_spec.o 00:05:08.122 CC examples/bdev/hello_world/hello_bdev.o 00:05:08.122 CXX test/cpp_headers/json.o 00:05:08.122 CC test/nvme/fused_ordering/fused_ordering.o 00:05:08.122 CC test/nvme/doorbell_aers/doorbell_aers.o 00:05:08.122 CC test/nvme/fdp/fdp.o 00:05:08.122 CC test/nvme/cuse/cuse.o 00:05:08.122 LINK nvme_compliance 00:05:08.122 LINK hello_fsdev 00:05:08.122 LINK blobcli 00:05:08.380 LINK hello_bdev 00:05:08.380 CXX test/cpp_headers/jsonrpc.o 00:05:08.380 LINK doorbell_aers 00:05:08.380 LINK fused_ordering 00:05:08.380 CXX test/cpp_headers/keyring.o 00:05:08.380 CXX test/cpp_headers/keyring_module.o 00:05:08.380 CC examples/bdev/bdevperf/bdevperf.o 00:05:08.380 CXX test/cpp_headers/likely.o 00:05:08.380 CXX test/cpp_headers/log.o 00:05:08.380 CXX test/cpp_headers/lvol.o 00:05:08.380 CXX test/cpp_headers/md5.o 00:05:08.380 LINK fdp 00:05:08.639 CXX test/cpp_headers/memory.o 00:05:08.639 CXX test/cpp_headers/mmio.o 00:05:08.639 CXX test/cpp_headers/nbd.o 00:05:08.639 CXX test/cpp_headers/net.o 00:05:08.639 CXX test/cpp_headers/notify.o 00:05:08.639 CXX 
test/cpp_headers/nvme.o 00:05:08.639 CXX test/cpp_headers/nvme_intel.o 00:05:08.639 CXX test/cpp_headers/nvme_ocssd.o 00:05:08.898 CXX test/cpp_headers/nvme_ocssd_spec.o 00:05:08.898 CXX test/cpp_headers/nvme_spec.o 00:05:08.898 CXX test/cpp_headers/nvme_zns.o 00:05:08.898 CXX test/cpp_headers/nvmf_cmd.o 00:05:08.898 CXX test/cpp_headers/nvmf_fc_spec.o 00:05:08.898 CXX test/cpp_headers/nvmf.o 00:05:08.898 CXX test/cpp_headers/nvmf_spec.o 00:05:08.898 CXX test/cpp_headers/nvmf_transport.o 00:05:09.156 CXX test/cpp_headers/opal.o 00:05:09.156 CXX test/cpp_headers/opal_spec.o 00:05:09.156 CXX test/cpp_headers/pci_ids.o 00:05:09.156 CXX test/cpp_headers/pipe.o 00:05:09.156 CXX test/cpp_headers/queue.o 00:05:09.156 CXX test/cpp_headers/reduce.o 00:05:09.156 CXX test/cpp_headers/rpc.o 00:05:09.156 CXX test/cpp_headers/scheduler.o 00:05:09.156 CXX test/cpp_headers/scsi.o 00:05:09.156 CXX test/cpp_headers/scsi_spec.o 00:05:09.156 CXX test/cpp_headers/sock.o 00:05:09.156 CXX test/cpp_headers/stdinc.o 00:05:09.414 LINK bdevperf 00:05:09.414 CXX test/cpp_headers/string.o 00:05:09.414 CXX test/cpp_headers/thread.o 00:05:09.414 CXX test/cpp_headers/trace.o 00:05:09.414 CXX test/cpp_headers/trace_parser.o 00:05:09.414 CXX test/cpp_headers/tree.o 00:05:09.414 CXX test/cpp_headers/ublk.o 00:05:09.414 CXX test/cpp_headers/util.o 00:05:09.414 CXX test/cpp_headers/uuid.o 00:05:09.414 LINK cuse 00:05:09.414 CXX test/cpp_headers/version.o 00:05:09.673 CXX test/cpp_headers/vfio_user_pci.o 00:05:09.673 CXX test/cpp_headers/vfio_user_spec.o 00:05:09.673 CXX test/cpp_headers/vhost.o 00:05:09.673 CXX test/cpp_headers/vmd.o 00:05:09.673 CXX test/cpp_headers/xor.o 00:05:09.673 CXX test/cpp_headers/zipf.o 00:05:09.673 CC examples/nvmf/nvmf/nvmf.o 00:05:09.937 LINK nvmf 00:05:11.309 LINK esnap 00:05:11.876 00:05:11.876 real 1m27.936s 00:05:11.876 user 8m5.552s 00:05:11.876 sys 1m43.503s 00:05:11.876 ************************************ 00:05:11.876 END TEST make 00:05:11.876 ************************************ 00:05:11.876 09:42:36 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:05:11.876 09:42:36 make -- common/autotest_common.sh@10 -- $ set +x 00:05:11.876 09:42:36 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:05:11.876 09:42:36 -- pm/common@29 -- $ signal_monitor_resources TERM 00:05:11.876 09:42:36 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:05:11.876 09:42:36 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:11.876 09:42:36 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:05:11.876 09:42:36 -- pm/common@44 -- $ pid=5251 00:05:11.876 09:42:36 -- pm/common@50 -- $ kill -TERM 5251 00:05:11.876 09:42:36 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:11.876 09:42:36 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:05:11.877 09:42:36 -- pm/common@44 -- $ pid=5253 00:05:11.877 09:42:36 -- pm/common@50 -- $ kill -TERM 5253 00:05:11.877 09:42:36 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:05:11.877 09:42:36 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:05:11.877 09:42:37 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:11.877 09:42:37 -- common/autotest_common.sh@1711 -- # lcov --version 00:05:11.877 09:42:37 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:12.136 09:42:37 -- common/autotest_common.sh@1711 
-- # lt 1.15 2 00:05:12.136 09:42:37 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:12.136 09:42:37 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:12.136 09:42:37 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:12.136 09:42:37 -- scripts/common.sh@336 -- # IFS=.-: 00:05:12.136 09:42:37 -- scripts/common.sh@336 -- # read -ra ver1 00:05:12.136 09:42:37 -- scripts/common.sh@337 -- # IFS=.-: 00:05:12.136 09:42:37 -- scripts/common.sh@337 -- # read -ra ver2 00:05:12.136 09:42:37 -- scripts/common.sh@338 -- # local 'op=<' 00:05:12.136 09:42:37 -- scripts/common.sh@340 -- # ver1_l=2 00:05:12.136 09:42:37 -- scripts/common.sh@341 -- # ver2_l=1 00:05:12.136 09:42:37 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:12.136 09:42:37 -- scripts/common.sh@344 -- # case "$op" in 00:05:12.136 09:42:37 -- scripts/common.sh@345 -- # : 1 00:05:12.136 09:42:37 -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:12.136 09:42:37 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:12.136 09:42:37 -- scripts/common.sh@365 -- # decimal 1 00:05:12.136 09:42:37 -- scripts/common.sh@353 -- # local d=1 00:05:12.136 09:42:37 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:12.136 09:42:37 -- scripts/common.sh@355 -- # echo 1 00:05:12.136 09:42:37 -- scripts/common.sh@365 -- # ver1[v]=1 00:05:12.136 09:42:37 -- scripts/common.sh@366 -- # decimal 2 00:05:12.136 09:42:37 -- scripts/common.sh@353 -- # local d=2 00:05:12.136 09:42:37 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:12.136 09:42:37 -- scripts/common.sh@355 -- # echo 2 00:05:12.136 09:42:37 -- scripts/common.sh@366 -- # ver2[v]=2 00:05:12.136 09:42:37 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:12.136 09:42:37 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:12.136 09:42:37 -- scripts/common.sh@368 -- # return 0 00:05:12.136 09:42:37 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:12.136 09:42:37 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:12.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.136 --rc genhtml_branch_coverage=1 00:05:12.136 --rc genhtml_function_coverage=1 00:05:12.136 --rc genhtml_legend=1 00:05:12.136 --rc geninfo_all_blocks=1 00:05:12.136 --rc geninfo_unexecuted_blocks=1 00:05:12.136 00:05:12.136 ' 00:05:12.136 09:42:37 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:12.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.136 --rc genhtml_branch_coverage=1 00:05:12.136 --rc genhtml_function_coverage=1 00:05:12.136 --rc genhtml_legend=1 00:05:12.136 --rc geninfo_all_blocks=1 00:05:12.136 --rc geninfo_unexecuted_blocks=1 00:05:12.136 00:05:12.136 ' 00:05:12.136 09:42:37 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:12.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.136 --rc genhtml_branch_coverage=1 00:05:12.136 --rc genhtml_function_coverage=1 00:05:12.136 --rc genhtml_legend=1 00:05:12.136 --rc geninfo_all_blocks=1 00:05:12.136 --rc geninfo_unexecuted_blocks=1 00:05:12.136 00:05:12.136 ' 00:05:12.136 09:42:37 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:12.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.136 --rc genhtml_branch_coverage=1 00:05:12.136 --rc genhtml_function_coverage=1 00:05:12.136 --rc genhtml_legend=1 00:05:12.136 --rc geninfo_all_blocks=1 00:05:12.136 --rc geninfo_unexecuted_blocks=1 00:05:12.136 00:05:12.136 ' 
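The xtrace above shows SPDK's cmp_versions helper deciding whether the installed lcov (1.15) is older than 2 before it picks the legacy --rc coverage options. As a rough standalone sketch of that field-by-field comparison (the helper name and return convention here are illustrative, not the exact scripts/common.sh source):

    # Compare two dotted version strings numerically, field by field.
    # Returns 0 (true) if $1 is strictly less than $2.
    version_lt() {
        local IFS='.-:'
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local i max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( i = 0; i < max; i++ )); do
            local a=${ver1[i]:-0} b=${ver2[i]:-0}
            (( a > b )) && return 1
            (( a < b )) && return 0
        done
        return 1   # equal versions are not "less than"
    }

    version_lt 1.15 2 && echo "lcov < 2: use legacy --rc coverage options"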
00:05:12.136 09:42:37 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:12.136 09:42:37 -- nvmf/common.sh@7 -- # uname -s 00:05:12.136 09:42:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:12.136 09:42:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:12.136 09:42:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:12.136 09:42:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:12.136 09:42:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:12.136 09:42:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:12.136 09:42:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:12.136 09:42:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:12.136 09:42:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:12.136 09:42:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:12.136 09:42:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:05:12.136 09:42:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:05:12.136 09:42:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:12.136 09:42:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:12.136 09:42:37 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:05:12.136 09:42:37 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:12.136 09:42:37 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:12.136 09:42:37 -- scripts/common.sh@15 -- # shopt -s extglob 00:05:12.136 09:42:37 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:12.136 09:42:37 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:12.136 09:42:37 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:12.136 09:42:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:12.136 09:42:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:12.136 09:42:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:12.136 09:42:37 -- paths/export.sh@5 -- # export PATH 00:05:12.136 09:42:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:12.136 09:42:37 -- nvmf/common.sh@51 -- # : 0 00:05:12.136 09:42:37 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:12.136 09:42:37 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:12.136 09:42:37 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:12.136 09:42:37 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
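The block above sources test/nvmf/common.sh, which fixes the TCP service port at 4420, generates a host NQN with `nvme gen-hostnqn`, and records the `nvme connect` command the tests reuse. A minimal initiator-side sketch built from those same pieces (the target address and exact flags are illustrative nvme-cli usage, not the harness itself):

    # Generate a host identity once and reuse it for every connection.
    NVME_HOSTNQN=$(nvme gen-hostnqn)

    # Connect to an SPDK TCP target on the port advertised in the trace above.
    nvme connect -t tcp \
        -a 127.0.0.1 -s 4420 \
        -n nqn.2016-06.io.spdk:testnqn \
        --hostnqn="$NVME_HOSTNQN"

    # Inspect what attached, then tear it down when done.
    nvme list-subsys
    nvme disconnect -n nqn.2016-06.io.spdk:testnqn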
00:05:12.136 09:42:37 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:12.136 09:42:37 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:12.136 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:12.136 09:42:37 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:12.136 09:42:37 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:12.136 09:42:37 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:12.136 09:42:37 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:05:12.136 09:42:37 -- spdk/autotest.sh@32 -- # uname -s 00:05:12.136 09:42:37 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:05:12.136 09:42:37 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:05:12.136 09:42:37 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:12.136 09:42:37 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:05:12.136 09:42:37 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:12.136 09:42:37 -- spdk/autotest.sh@44 -- # modprobe nbd 00:05:12.136 09:42:37 -- spdk/autotest.sh@46 -- # type -P udevadm 00:05:12.136 09:42:37 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:05:12.136 09:42:37 -- spdk/autotest.sh@48 -- # udevadm_pid=54349 00:05:12.136 09:42:37 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:05:12.136 09:42:37 -- pm/common@17 -- # local monitor 00:05:12.136 09:42:37 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:12.136 09:42:37 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:12.136 09:42:37 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:05:12.136 09:42:37 -- pm/common@25 -- # sleep 1 00:05:12.136 09:42:37 -- pm/common@21 -- # date +%s 00:05:12.136 09:42:37 -- pm/common@21 -- # date +%s 00:05:12.136 09:42:37 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733478157 00:05:12.136 09:42:37 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733478157 00:05:12.136 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733478157_collect-cpu-load.pm.log 00:05:12.137 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733478157_collect-vmstat.pm.log 00:05:13.074 09:42:38 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:05:13.075 09:42:38 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:05:13.075 09:42:38 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:13.075 09:42:38 -- common/autotest_common.sh@10 -- # set +x 00:05:13.075 09:42:38 -- spdk/autotest.sh@59 -- # create_test_list 00:05:13.075 09:42:38 -- common/autotest_common.sh@752 -- # xtrace_disable 00:05:13.075 09:42:38 -- common/autotest_common.sh@10 -- # set +x 00:05:13.075 09:42:38 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:05:13.075 09:42:38 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:05:13.334 09:42:38 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:05:13.334 09:42:38 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:05:13.334 09:42:38 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:05:13.334 09:42:38 -- 
spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:05:13.334 09:42:38 -- common/autotest_common.sh@1457 -- # uname 00:05:13.334 09:42:38 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:05:13.334 09:42:38 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:05:13.334 09:42:38 -- common/autotest_common.sh@1477 -- # uname 00:05:13.334 09:42:38 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:05:13.334 09:42:38 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:05:13.334 09:42:38 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:05:13.334 lcov: LCOV version 1.15 00:05:13.334 09:42:38 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:05:28.218 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:28.218 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:43.094 09:43:07 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:43.094 09:43:07 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:43.094 09:43:07 -- common/autotest_common.sh@10 -- # set +x 00:05:43.094 09:43:07 -- spdk/autotest.sh@78 -- # rm -f 00:05:43.094 09:43:07 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:43.094 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:43.371 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:43.371 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:43.371 09:43:08 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:43.371 09:43:08 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:05:43.371 09:43:08 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:05:43.371 09:43:08 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:05:43.371 09:43:08 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:05:43.371 09:43:08 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:05:43.371 09:43:08 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:05:43.371 09:43:08 -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:05:43.371 09:43:08 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:43.371 09:43:08 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:05:43.371 09:43:08 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:05:43.371 09:43:08 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:43.371 09:43:08 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:43.371 09:43:08 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:05:43.371 09:43:08 -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:05:43.371 09:43:08 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:43.372 09:43:08 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:05:43.372 09:43:08 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:05:43.372 09:43:08 -- 
common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:43.372 09:43:08 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:43.372 09:43:08 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:43.372 09:43:08 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n2 00:05:43.372 09:43:08 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:05:43.372 09:43:08 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:43.372 09:43:08 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:43.372 09:43:08 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:43.372 09:43:08 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n3 00:05:43.372 09:43:08 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:05:43.372 09:43:08 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:43.372 09:43:08 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:43.372 09:43:08 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:43.372 09:43:08 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:43.372 09:43:08 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:43.372 09:43:08 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:43.372 09:43:08 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:43.372 09:43:08 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:43.372 No valid GPT data, bailing 00:05:43.372 09:43:08 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:43.372 09:43:08 -- scripts/common.sh@394 -- # pt= 00:05:43.372 09:43:08 -- scripts/common.sh@395 -- # return 1 00:05:43.372 09:43:08 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:43.372 1+0 records in 00:05:43.372 1+0 records out 00:05:43.372 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00513268 s, 204 MB/s 00:05:43.372 09:43:08 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:43.372 09:43:08 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:43.372 09:43:08 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:05:43.372 09:43:08 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:05:43.372 09:43:08 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:43.372 No valid GPT data, bailing 00:05:43.372 09:43:08 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:43.372 09:43:08 -- scripts/common.sh@394 -- # pt= 00:05:43.372 09:43:08 -- scripts/common.sh@395 -- # return 1 00:05:43.372 09:43:08 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:43.372 1+0 records in 00:05:43.372 1+0 records out 00:05:43.372 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00435646 s, 241 MB/s 00:05:43.372 09:43:08 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:43.372 09:43:08 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:43.372 09:43:08 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:05:43.372 09:43:08 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:05:43.372 09:43:08 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:05:43.641 No valid GPT data, bailing 00:05:43.641 09:43:08 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:43.641 09:43:08 -- scripts/common.sh@394 -- # pt= 00:05:43.641 09:43:08 -- scripts/common.sh@395 -- # return 1 00:05:43.641 09:43:08 -- 
spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:05:43.641 1+0 records in 00:05:43.641 1+0 records out 00:05:43.641 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00473253 s, 222 MB/s 00:05:43.641 09:43:08 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:43.641 09:43:08 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:43.641 09:43:08 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:05:43.641 09:43:08 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:05:43.641 09:43:08 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:05:43.641 No valid GPT data, bailing 00:05:43.641 09:43:08 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:43.641 09:43:08 -- scripts/common.sh@394 -- # pt= 00:05:43.641 09:43:08 -- scripts/common.sh@395 -- # return 1 00:05:43.641 09:43:08 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:05:43.641 1+0 records in 00:05:43.641 1+0 records out 00:05:43.641 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00476923 s, 220 MB/s 00:05:43.641 09:43:08 -- spdk/autotest.sh@105 -- # sync 00:05:43.900 09:43:08 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:43.900 09:43:08 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:43.900 09:43:08 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:45.805 09:43:11 -- spdk/autotest.sh@111 -- # uname -s 00:05:45.805 09:43:11 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:45.805 09:43:11 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:45.805 09:43:11 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:46.743 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:46.743 Hugepages 00:05:46.743 node hugesize free / total 00:05:46.743 node0 1048576kB 0 / 0 00:05:46.743 node0 2048kB 0 / 0 00:05:46.743 00:05:46.743 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:46.743 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:46.743 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:46.743 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:05:46.743 09:43:11 -- spdk/autotest.sh@117 -- # uname -s 00:05:46.743 09:43:11 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:46.743 09:43:11 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:46.743 09:43:11 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:47.311 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:47.569 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:47.569 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:47.569 09:43:12 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:48.506 09:43:13 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:48.506 09:43:13 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:48.506 09:43:13 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:05:48.506 09:43:13 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:05:48.506 09:43:13 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:48.506 09:43:13 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:48.506 09:43:13 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:48.506 09:43:13 -- common/autotest_common.sh@1499 -- # 
/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:48.506 09:43:13 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:48.766 09:43:13 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:05:48.766 09:43:13 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:48.766 09:43:13 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:49.024 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:49.024 Waiting for block devices as requested 00:05:49.024 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:49.283 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:49.283 09:43:14 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:49.283 09:43:14 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:49.283 09:43:14 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:49.283 09:43:14 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:05:49.283 09:43:14 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:49.283 09:43:14 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:49.283 09:43:14 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:49.283 09:43:14 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:05:49.283 09:43:14 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:05:49.283 09:43:14 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:05:49.283 09:43:14 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:49.283 09:43:14 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:05:49.283 09:43:14 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:49.283 09:43:14 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:49.283 09:43:14 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:49.283 09:43:14 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:49.283 09:43:14 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:05:49.283 09:43:14 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:49.283 09:43:14 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:49.283 09:43:14 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:49.283 09:43:14 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:49.283 09:43:14 -- common/autotest_common.sh@1543 -- # continue 00:05:49.283 09:43:14 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:49.283 09:43:14 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:49.283 09:43:14 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:49.283 09:43:14 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:05:49.283 09:43:14 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:49.284 09:43:14 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:49.284 09:43:14 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:49.284 09:43:14 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:49.284 09:43:14 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 
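Here autotest resolves each PCI address reported by gen_nvme.sh to its controller node by walking sysfs: resolve the /sys/class/nvme entries with readlink, keep the one whose device path contains the BDF, and take the basename. A condensed sketch of the same lookup (the function name and error handling are mine; the sysfs layout matches the trace):

    # Map a PCI BDF (e.g. 0000:00:10.0) to its NVMe controller name (e.g. nvme1).
    nvme_ctrlr_from_bdf() {
        local bdf=$1 path
        for path in /sys/class/nvme/nvme*; do
            # Resolved path looks like /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1
            if readlink -f "$path" | grep -q "$bdf/nvme/nvme"; then
                basename "$path"
                return 0
            fi
        done
        return 1
    }

    ctrlr=$(nvme_ctrlr_from_bdf 0000:00:10.0) && echo "controller: /dev/$ctrlr"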
00:05:49.284 09:43:14 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:05:49.284 09:43:14 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:05:49.284 09:43:14 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:49.284 09:43:14 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:49.284 09:43:14 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:49.284 09:43:14 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:49.284 09:43:14 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:49.284 09:43:14 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:49.284 09:43:14 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:49.284 09:43:14 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:49.284 09:43:14 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:49.284 09:43:14 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:49.284 09:43:14 -- common/autotest_common.sh@1543 -- # continue 00:05:49.284 09:43:14 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:49.284 09:43:14 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:49.284 09:43:14 -- common/autotest_common.sh@10 -- # set +x 00:05:49.284 09:43:14 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:49.284 09:43:14 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:49.284 09:43:14 -- common/autotest_common.sh@10 -- # set +x 00:05:49.284 09:43:14 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:50.219 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:50.219 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:50.219 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:50.219 09:43:15 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:50.219 09:43:15 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:50.219 09:43:15 -- common/autotest_common.sh@10 -- # set +x 00:05:50.219 09:43:15 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:50.219 09:43:15 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:05:50.219 09:43:15 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:05:50.219 09:43:15 -- common/autotest_common.sh@1563 -- # bdfs=() 00:05:50.219 09:43:15 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:05:50.219 09:43:15 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:05:50.219 09:43:15 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:05:50.219 09:43:15 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:50.219 09:43:15 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:50.219 09:43:15 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:50.219 09:43:15 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:50.219 09:43:15 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:50.219 09:43:15 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:50.219 09:43:15 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:05:50.219 09:43:15 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:50.219 09:43:15 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:50.219 09:43:15 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:50.219 09:43:15 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:50.219 09:43:15 -- 
common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:50.219 09:43:15 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:50.219 09:43:15 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:50.219 09:43:15 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:50.219 09:43:15 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:50.219 09:43:15 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:05:50.219 09:43:15 -- common/autotest_common.sh@1572 -- # return 0 00:05:50.219 09:43:15 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:05:50.219 09:43:15 -- common/autotest_common.sh@1580 -- # return 0 00:05:50.219 09:43:15 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:50.219 09:43:15 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:50.219 09:43:15 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:50.219 09:43:15 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:50.220 09:43:15 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:50.220 09:43:15 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:50.220 09:43:15 -- common/autotest_common.sh@10 -- # set +x 00:05:50.220 09:43:15 -- spdk/autotest.sh@151 -- # [[ 1 -eq 1 ]] 00:05:50.220 09:43:15 -- spdk/autotest.sh@152 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:05:50.220 09:43:15 -- spdk/autotest.sh@152 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:05:50.220 09:43:15 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:50.220 09:43:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:50.220 09:43:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:50.220 09:43:15 -- common/autotest_common.sh@10 -- # set +x 00:05:50.220 ************************************ 00:05:50.220 START TEST env 00:05:50.220 ************************************ 00:05:50.220 09:43:15 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:50.479 * Looking for test storage... 00:05:50.479 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:50.479 09:43:15 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:50.479 09:43:15 env -- common/autotest_common.sh@1711 -- # lcov --version 00:05:50.479 09:43:15 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:50.479 09:43:15 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:50.479 09:43:15 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:50.479 09:43:15 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:50.479 09:43:15 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:50.479 09:43:15 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:50.479 09:43:15 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:50.479 09:43:15 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:50.479 09:43:15 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:50.479 09:43:15 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:50.479 09:43:15 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:50.479 09:43:15 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:50.479 09:43:15 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:50.479 09:43:15 env -- scripts/common.sh@344 -- # case "$op" in 00:05:50.479 09:43:15 env -- scripts/common.sh@345 -- # : 1 00:05:50.479 09:43:15 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:50.479 09:43:15 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:50.479 09:43:15 env -- scripts/common.sh@365 -- # decimal 1 00:05:50.479 09:43:15 env -- scripts/common.sh@353 -- # local d=1 00:05:50.479 09:43:15 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:50.479 09:43:15 env -- scripts/common.sh@355 -- # echo 1 00:05:50.479 09:43:15 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:50.479 09:43:15 env -- scripts/common.sh@366 -- # decimal 2 00:05:50.479 09:43:15 env -- scripts/common.sh@353 -- # local d=2 00:05:50.479 09:43:15 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:50.479 09:43:15 env -- scripts/common.sh@355 -- # echo 2 00:05:50.479 09:43:15 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:50.479 09:43:15 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:50.479 09:43:15 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:50.479 09:43:15 env -- scripts/common.sh@368 -- # return 0 00:05:50.479 09:43:15 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:50.479 09:43:15 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:50.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.479 --rc genhtml_branch_coverage=1 00:05:50.479 --rc genhtml_function_coverage=1 00:05:50.479 --rc genhtml_legend=1 00:05:50.479 --rc geninfo_all_blocks=1 00:05:50.479 --rc geninfo_unexecuted_blocks=1 00:05:50.479 00:05:50.479 ' 00:05:50.479 09:43:15 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:50.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.479 --rc genhtml_branch_coverage=1 00:05:50.479 --rc genhtml_function_coverage=1 00:05:50.479 --rc genhtml_legend=1 00:05:50.479 --rc geninfo_all_blocks=1 00:05:50.479 --rc geninfo_unexecuted_blocks=1 00:05:50.479 00:05:50.479 ' 00:05:50.479 09:43:15 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:50.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.479 --rc genhtml_branch_coverage=1 00:05:50.479 --rc genhtml_function_coverage=1 00:05:50.479 --rc genhtml_legend=1 00:05:50.479 --rc geninfo_all_blocks=1 00:05:50.479 --rc geninfo_unexecuted_blocks=1 00:05:50.479 00:05:50.479 ' 00:05:50.479 09:43:15 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:50.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.479 --rc genhtml_branch_coverage=1 00:05:50.479 --rc genhtml_function_coverage=1 00:05:50.479 --rc genhtml_legend=1 00:05:50.479 --rc geninfo_all_blocks=1 00:05:50.479 --rc geninfo_unexecuted_blocks=1 00:05:50.479 00:05:50.479 ' 00:05:50.479 09:43:15 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:50.479 09:43:15 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:50.479 09:43:15 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:50.479 09:43:15 env -- common/autotest_common.sh@10 -- # set +x 00:05:50.479 ************************************ 00:05:50.479 START TEST env_memory 00:05:50.479 ************************************ 00:05:50.479 09:43:15 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:50.479 00:05:50.479 00:05:50.479 CUnit - A unit testing framework for C - Version 2.1-3 00:05:50.479 http://cunit.sourceforge.net/ 00:05:50.479 00:05:50.479 00:05:50.479 Suite: memory 00:05:50.479 Test: alloc and free memory map ...[2024-12-06 09:43:15.735062] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:50.738 passed 00:05:50.738 Test: mem map translation ...[2024-12-06 09:43:15.766138] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:50.738 [2024-12-06 09:43:15.766170] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:50.738 [2024-12-06 09:43:15.766225] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:50.738 [2024-12-06 09:43:15.766236] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:50.738 passed 00:05:50.738 Test: mem map registration ...[2024-12-06 09:43:15.829878] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:50.738 [2024-12-06 09:43:15.829914] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:50.738 passed 00:05:50.738 Test: mem map adjacent registrations ...passed 00:05:50.738 00:05:50.738 Run Summary: Type Total Ran Passed Failed Inactive 00:05:50.738 suites 1 1 n/a 0 0 00:05:50.738 tests 4 4 4 0 0 00:05:50.738 asserts 152 152 152 0 n/a 00:05:50.738 00:05:50.738 Elapsed time = 0.213 seconds 00:05:50.738 00:05:50.738 real 0m0.232s 00:05:50.738 user 0m0.218s 00:05:50.738 sys 0m0.011s 00:05:50.738 09:43:15 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:50.738 09:43:15 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:50.738 ************************************ 00:05:50.738 END TEST env_memory 00:05:50.738 ************************************ 00:05:50.738 09:43:15 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:50.738 09:43:15 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:50.738 09:43:15 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:50.738 09:43:15 env -- common/autotest_common.sh@10 -- # set +x 00:05:50.738 ************************************ 00:05:50.738 START TEST env_vtophys 00:05:50.738 ************************************ 00:05:50.738 09:43:15 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:50.738 EAL: lib.eal log level changed from notice to debug 00:05:50.738 EAL: Detected lcore 0 as core 0 on socket 0 00:05:50.738 EAL: Detected lcore 1 as core 0 on socket 0 00:05:50.738 EAL: Detected lcore 2 as core 0 on socket 0 00:05:50.738 EAL: Detected lcore 3 as core 0 on socket 0 00:05:50.738 EAL: Detected lcore 4 as core 0 on socket 0 00:05:50.738 EAL: Detected lcore 5 as core 0 on socket 0 00:05:50.738 EAL: Detected lcore 6 as core 0 on socket 0 00:05:50.738 EAL: Detected lcore 7 as core 0 on socket 0 00:05:50.739 EAL: Detected lcore 8 as core 0 on socket 0 00:05:50.739 EAL: Detected lcore 9 as core 0 on socket 0 00:05:50.739 EAL: Maximum logical cores by configuration: 128 00:05:50.739 EAL: Detected CPU lcores: 10 00:05:50.739 EAL: Detected NUMA nodes: 1 00:05:50.739 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:50.739 EAL: Detected shared linkage of DPDK 00:05:50.739 EAL: No 
shared files mode enabled, IPC will be disabled 00:05:50.739 EAL: Selected IOVA mode 'PA' 00:05:50.739 EAL: Probing VFIO support... 00:05:50.739 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:50.739 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:50.739 EAL: Ask a virtual area of 0x2e000 bytes 00:05:50.739 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:50.998 EAL: Setting up physically contiguous memory... 00:05:50.998 EAL: Setting maximum number of open files to 524288 00:05:50.998 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:50.998 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:50.998 EAL: Ask a virtual area of 0x61000 bytes 00:05:50.998 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:50.998 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:50.998 EAL: Ask a virtual area of 0x400000000 bytes 00:05:50.998 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:50.998 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:50.998 EAL: Ask a virtual area of 0x61000 bytes 00:05:50.998 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:50.998 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:50.998 EAL: Ask a virtual area of 0x400000000 bytes 00:05:50.998 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:50.998 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:50.998 EAL: Ask a virtual area of 0x61000 bytes 00:05:50.998 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:50.998 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:50.998 EAL: Ask a virtual area of 0x400000000 bytes 00:05:50.998 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:50.998 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:50.998 EAL: Ask a virtual area of 0x61000 bytes 00:05:50.998 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:50.998 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:50.998 EAL: Ask a virtual area of 0x400000000 bytes 00:05:50.998 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:50.998 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:50.998 EAL: Hugepages will be freed exactly as allocated. 00:05:50.998 EAL: No shared files mode enabled, IPC is disabled 00:05:50.998 EAL: No shared files mode enabled, IPC is disabled 00:05:50.998 EAL: TSC frequency is ~2200000 KHz 00:05:50.999 EAL: Main lcore 0 is ready (tid=7f6b40bf2a00;cpuset=[0]) 00:05:50.999 EAL: Trying to obtain current memory policy. 00:05:50.999 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:50.999 EAL: Restoring previous memory policy: 0 00:05:50.999 EAL: request: mp_malloc_sync 00:05:50.999 EAL: No shared files mode enabled, IPC is disabled 00:05:50.999 EAL: Heap on socket 0 was expanded by 2MB 00:05:50.999 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:50.999 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:50.999 EAL: Mem event callback 'spdk:(nil)' registered 00:05:50.999 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:05:50.999 00:05:50.999 00:05:50.999 CUnit - A unit testing framework for C - Version 2.1-3 00:05:50.999 http://cunit.sourceforge.net/ 00:05:50.999 00:05:50.999 00:05:50.999 Suite: components_suite 00:05:50.999 Test: vtophys_malloc_test ...passed 00:05:50.999 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:50.999 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:50.999 EAL: Restoring previous memory policy: 4 00:05:50.999 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.999 EAL: request: mp_malloc_sync 00:05:50.999 EAL: No shared files mode enabled, IPC is disabled 00:05:50.999 EAL: Heap on socket 0 was expanded by 4MB 00:05:50.999 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.999 EAL: request: mp_malloc_sync 00:05:50.999 EAL: No shared files mode enabled, IPC is disabled 00:05:50.999 EAL: Heap on socket 0 was shrunk by 4MB 00:05:50.999 EAL: Trying to obtain current memory policy. 00:05:50.999 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:50.999 EAL: Restoring previous memory policy: 4 00:05:50.999 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.999 EAL: request: mp_malloc_sync 00:05:50.999 EAL: No shared files mode enabled, IPC is disabled 00:05:50.999 EAL: Heap on socket 0 was expanded by 6MB 00:05:50.999 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.999 EAL: request: mp_malloc_sync 00:05:50.999 EAL: No shared files mode enabled, IPC is disabled 00:05:50.999 EAL: Heap on socket 0 was shrunk by 6MB 00:05:50.999 EAL: Trying to obtain current memory policy. 00:05:50.999 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:50.999 EAL: Restoring previous memory policy: 4 00:05:50.999 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.999 EAL: request: mp_malloc_sync 00:05:50.999 EAL: No shared files mode enabled, IPC is disabled 00:05:50.999 EAL: Heap on socket 0 was expanded by 10MB 00:05:50.999 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.999 EAL: request: mp_malloc_sync 00:05:50.999 EAL: No shared files mode enabled, IPC is disabled 00:05:50.999 EAL: Heap on socket 0 was shrunk by 10MB 00:05:50.999 EAL: Trying to obtain current memory policy. 00:05:50.999 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:50.999 EAL: Restoring previous memory policy: 4 00:05:50.999 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.999 EAL: request: mp_malloc_sync 00:05:50.999 EAL: No shared files mode enabled, IPC is disabled 00:05:50.999 EAL: Heap on socket 0 was expanded by 18MB 00:05:50.999 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.999 EAL: request: mp_malloc_sync 00:05:50.999 EAL: No shared files mode enabled, IPC is disabled 00:05:50.999 EAL: Heap on socket 0 was shrunk by 18MB 00:05:50.999 EAL: Trying to obtain current memory policy. 00:05:50.999 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:50.999 EAL: Restoring previous memory policy: 4 00:05:50.999 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.999 EAL: request: mp_malloc_sync 00:05:50.999 EAL: No shared files mode enabled, IPC is disabled 00:05:50.999 EAL: Heap on socket 0 was expanded by 34MB 00:05:50.999 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.999 EAL: request: mp_malloc_sync 00:05:50.999 EAL: No shared files mode enabled, IPC is disabled 00:05:50.999 EAL: Heap on socket 0 was shrunk by 34MB 00:05:50.999 EAL: Trying to obtain current memory policy. 
00:05:50.999 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:50.999 EAL: Restoring previous memory policy: 4 00:05:50.999 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.999 EAL: request: mp_malloc_sync 00:05:50.999 EAL: No shared files mode enabled, IPC is disabled 00:05:50.999 EAL: Heap on socket 0 was expanded by 66MB 00:05:50.999 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.999 EAL: request: mp_malloc_sync 00:05:50.999 EAL: No shared files mode enabled, IPC is disabled 00:05:50.999 EAL: Heap on socket 0 was shrunk by 66MB 00:05:50.999 EAL: Trying to obtain current memory policy. 00:05:50.999 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:50.999 EAL: Restoring previous memory policy: 4 00:05:50.999 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.999 EAL: request: mp_malloc_sync 00:05:50.999 EAL: No shared files mode enabled, IPC is disabled 00:05:50.999 EAL: Heap on socket 0 was expanded by 130MB 00:05:50.999 EAL: Calling mem event callback 'spdk:(nil)' 00:05:51.258 EAL: request: mp_malloc_sync 00:05:51.258 EAL: No shared files mode enabled, IPC is disabled 00:05:51.258 EAL: Heap on socket 0 was shrunk by 130MB 00:05:51.258 EAL: Trying to obtain current memory policy. 00:05:51.258 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:51.258 EAL: Restoring previous memory policy: 4 00:05:51.258 EAL: Calling mem event callback 'spdk:(nil)' 00:05:51.258 EAL: request: mp_malloc_sync 00:05:51.258 EAL: No shared files mode enabled, IPC is disabled 00:05:51.258 EAL: Heap on socket 0 was expanded by 258MB 00:05:51.258 EAL: Calling mem event callback 'spdk:(nil)' 00:05:51.258 EAL: request: mp_malloc_sync 00:05:51.258 EAL: No shared files mode enabled, IPC is disabled 00:05:51.258 EAL: Heap on socket 0 was shrunk by 258MB 00:05:51.258 EAL: Trying to obtain current memory policy. 00:05:51.258 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:51.517 EAL: Restoring previous memory policy: 4 00:05:51.517 EAL: Calling mem event callback 'spdk:(nil)' 00:05:51.517 EAL: request: mp_malloc_sync 00:05:51.517 EAL: No shared files mode enabled, IPC is disabled 00:05:51.517 EAL: Heap on socket 0 was expanded by 514MB 00:05:51.517 EAL: Calling mem event callback 'spdk:(nil)' 00:05:51.517 EAL: request: mp_malloc_sync 00:05:51.517 EAL: No shared files mode enabled, IPC is disabled 00:05:51.517 EAL: Heap on socket 0 was shrunk by 514MB 00:05:51.517 EAL: Trying to obtain current memory policy. 
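The repeated "Heap on socket 0 was expanded/shrunk by N MB" lines come from vtophys_spdk_malloc_test doubling its allocation size; each step is backed by the 2 MB hugepages that EAL frees exactly as allocated. To watch that from the shell while env_vtophys runs, the standard kernel counters are enough (the paths below are the usual procfs/sysfs locations, not something the test itself prints):

    # Snapshot hugepage usage before/after (or in a loop) while the test runs.
    grep -E 'HugePages_(Total|Free|Rsvd)' /proc/meminfo

    # Per-size view; 2048kB matches the 2 MB pages used in the trace above.
    cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages \
        /sys/kernel/mm/hugepages/hugepages-2048kB/free_hugepages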
00:05:51.517 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:51.811 EAL: Restoring previous memory policy: 4 00:05:51.811 EAL: Calling mem event callback 'spdk:(nil)' 00:05:51.811 EAL: request: mp_malloc_sync 00:05:51.811 EAL: No shared files mode enabled, IPC is disabled 00:05:51.811 EAL: Heap on socket 0 was expanded by 1026MB 00:05:52.071 EAL: Calling mem event callback 'spdk:(nil)' 00:05:52.330 passed 00:05:52.330 00:05:52.330 Run Summary: Type Total Ran Passed Failed Inactive 00:05:52.330 suites 1 1 n/a 0 0 00:05:52.330 tests 2 2 2 0 0 00:05:52.330 asserts 5421 5421 5421 0 n/a 00:05:52.330 00:05:52.330 Elapsed time = 1.236 seconds 00:05:52.330 EAL: request: mp_malloc_sync 00:05:52.330 EAL: No shared files mode enabled, IPC is disabled 00:05:52.330 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:52.330 EAL: Calling mem event callback 'spdk:(nil)' 00:05:52.330 EAL: request: mp_malloc_sync 00:05:52.330 EAL: No shared files mode enabled, IPC is disabled 00:05:52.330 EAL: Heap on socket 0 was shrunk by 2MB 00:05:52.330 EAL: No shared files mode enabled, IPC is disabled 00:05:52.330 EAL: No shared files mode enabled, IPC is disabled 00:05:52.330 EAL: No shared files mode enabled, IPC is disabled 00:05:52.330 00:05:52.330 real 0m1.444s 00:05:52.330 user 0m0.812s 00:05:52.330 sys 0m0.500s 00:05:52.330 09:43:17 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:52.330 ************************************ 00:05:52.330 END TEST env_vtophys 00:05:52.330 ************************************ 00:05:52.330 09:43:17 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:52.330 09:43:17 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:52.330 09:43:17 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:52.330 09:43:17 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:52.330 09:43:17 env -- common/autotest_common.sh@10 -- # set +x 00:05:52.330 ************************************ 00:05:52.330 START TEST env_pci 00:05:52.330 ************************************ 00:05:52.330 09:43:17 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:52.330 00:05:52.330 00:05:52.330 CUnit - A unit testing framework for C - Version 2.1-3 00:05:52.330 http://cunit.sourceforge.net/ 00:05:52.330 00:05:52.330 00:05:52.330 Suite: pci 00:05:52.330 Test: pci_hook ...[2024-12-06 09:43:17.480604] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56555 has claimed it 00:05:52.330 passed 00:05:52.330 00:05:52.330 Run Summary: Type Total Ran Passed Failed Inactive 00:05:52.330 suites 1 1 n/a 0 0 00:05:52.330 tests 1 1 1 0 0 00:05:52.330 asserts 25 25 25 0 n/a 00:05:52.330 00:05:52.330 Elapsed time = 0.002 seconds 00:05:52.330 EAL: Cannot find device (10000:00:01.0) 00:05:52.330 EAL: Failed to attach device on primary process 00:05:52.330 00:05:52.330 real 0m0.018s 00:05:52.330 user 0m0.010s 00:05:52.330 sys 0m0.007s 00:05:52.330 09:43:17 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:52.330 ************************************ 00:05:52.330 END TEST env_pci 00:05:52.330 09:43:17 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:52.330 ************************************ 00:05:52.330 09:43:17 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:52.330 09:43:17 env -- env/env.sh@15 -- # uname 00:05:52.330 09:43:17 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:52.330 09:43:17 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:52.330 09:43:17 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:52.330 09:43:17 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:05:52.330 09:43:17 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:52.330 09:43:17 env -- common/autotest_common.sh@10 -- # set +x 00:05:52.330 ************************************ 00:05:52.330 START TEST env_dpdk_post_init 00:05:52.330 ************************************ 00:05:52.330 09:43:17 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:52.330 EAL: Detected CPU lcores: 10 00:05:52.330 EAL: Detected NUMA nodes: 1 00:05:52.330 EAL: Detected shared linkage of DPDK 00:05:52.330 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:52.330 EAL: Selected IOVA mode 'PA' 00:05:52.589 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:52.589 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:52.589 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:52.589 Starting DPDK initialization... 00:05:52.589 Starting SPDK post initialization... 00:05:52.589 SPDK NVMe probe 00:05:52.589 Attaching to 0000:00:10.0 00:05:52.589 Attaching to 0000:00:11.0 00:05:52.589 Attached to 0000:00:10.0 00:05:52.589 Attached to 0000:00:11.0 00:05:52.589 Cleaning up... 00:05:52.589 00:05:52.589 real 0m0.186s 00:05:52.589 user 0m0.049s 00:05:52.589 sys 0m0.038s 00:05:52.589 09:43:17 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:52.589 09:43:17 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:52.589 ************************************ 00:05:52.589 END TEST env_dpdk_post_init 00:05:52.589 ************************************ 00:05:52.589 09:43:17 env -- env/env.sh@26 -- # uname 00:05:52.589 09:43:17 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:52.589 09:43:17 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:52.589 09:43:17 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:52.589 09:43:17 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:52.589 09:43:17 env -- common/autotest_common.sh@10 -- # set +x 00:05:52.589 ************************************ 00:05:52.589 START TEST env_mem_callbacks 00:05:52.589 ************************************ 00:05:52.589 09:43:17 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:52.589 EAL: Detected CPU lcores: 10 00:05:52.589 EAL: Detected NUMA nodes: 1 00:05:52.589 EAL: Detected shared linkage of DPDK 00:05:52.589 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:52.589 EAL: Selected IOVA mode 'PA' 00:05:52.848 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:52.848 00:05:52.848 00:05:52.848 CUnit - A unit testing framework for C - Version 2.1-3 00:05:52.848 http://cunit.sourceforge.net/ 00:05:52.848 00:05:52.848 00:05:52.848 Suite: memory 00:05:52.848 Test: test ... 
00:05:52.848 register 0x200000200000 2097152 00:05:52.848 malloc 3145728 00:05:52.848 register 0x200000400000 4194304 00:05:52.848 buf 0x200000500000 len 3145728 PASSED 00:05:52.848 malloc 64 00:05:52.848 buf 0x2000004fff40 len 64 PASSED 00:05:52.848 malloc 4194304 00:05:52.848 register 0x200000800000 6291456 00:05:52.848 buf 0x200000a00000 len 4194304 PASSED 00:05:52.848 free 0x200000500000 3145728 00:05:52.848 free 0x2000004fff40 64 00:05:52.848 unregister 0x200000400000 4194304 PASSED 00:05:52.848 free 0x200000a00000 4194304 00:05:52.848 unregister 0x200000800000 6291456 PASSED 00:05:52.848 malloc 8388608 00:05:52.848 register 0x200000400000 10485760 00:05:52.848 buf 0x200000600000 len 8388608 PASSED 00:05:52.848 free 0x200000600000 8388608 00:05:52.848 unregister 0x200000400000 10485760 PASSED 00:05:52.848 passed 00:05:52.848 00:05:52.848 Run Summary: Type Total Ran Passed Failed Inactive 00:05:52.848 suites 1 1 n/a 0 0 00:05:52.848 tests 1 1 1 0 0 00:05:52.848 asserts 15 15 15 0 n/a 00:05:52.848 00:05:52.848 Elapsed time = 0.009 seconds 00:05:52.848 00:05:52.848 real 0m0.146s 00:05:52.848 user 0m0.020s 00:05:52.848 sys 0m0.025s 00:05:52.848 09:43:17 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:52.848 09:43:17 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:52.848 ************************************ 00:05:52.848 END TEST env_mem_callbacks 00:05:52.848 ************************************ 00:05:52.848 00:05:52.848 real 0m2.497s 00:05:52.848 user 0m1.317s 00:05:52.848 sys 0m0.836s 00:05:52.848 09:43:17 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:52.848 09:43:17 env -- common/autotest_common.sh@10 -- # set +x 00:05:52.848 ************************************ 00:05:52.848 END TEST env 00:05:52.848 ************************************ 00:05:52.848 09:43:18 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:52.848 09:43:18 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:52.848 09:43:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:52.848 09:43:18 -- common/autotest_common.sh@10 -- # set +x 00:05:52.848 ************************************ 00:05:52.848 START TEST rpc 00:05:52.848 ************************************ 00:05:52.848 09:43:18 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:52.848 * Looking for test storage... 
00:05:52.848 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:52.848 09:43:18 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:52.848 09:43:18 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:52.848 09:43:18 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:53.107 09:43:18 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:53.107 09:43:18 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:53.107 09:43:18 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:53.107 09:43:18 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:53.107 09:43:18 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:53.107 09:43:18 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:53.107 09:43:18 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:53.107 09:43:18 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:53.107 09:43:18 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:53.107 09:43:18 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:53.107 09:43:18 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:53.107 09:43:18 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:53.107 09:43:18 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:53.107 09:43:18 rpc -- scripts/common.sh@345 -- # : 1 00:05:53.107 09:43:18 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:53.107 09:43:18 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:53.107 09:43:18 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:53.107 09:43:18 rpc -- scripts/common.sh@353 -- # local d=1 00:05:53.107 09:43:18 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:53.107 09:43:18 rpc -- scripts/common.sh@355 -- # echo 1 00:05:53.107 09:43:18 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:53.108 09:43:18 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:53.108 09:43:18 rpc -- scripts/common.sh@353 -- # local d=2 00:05:53.108 09:43:18 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:53.108 09:43:18 rpc -- scripts/common.sh@355 -- # echo 2 00:05:53.108 09:43:18 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:53.108 09:43:18 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:53.108 09:43:18 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:53.108 09:43:18 rpc -- scripts/common.sh@368 -- # return 0 00:05:53.108 09:43:18 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:53.108 09:43:18 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:53.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.108 --rc genhtml_branch_coverage=1 00:05:53.108 --rc genhtml_function_coverage=1 00:05:53.108 --rc genhtml_legend=1 00:05:53.108 --rc geninfo_all_blocks=1 00:05:53.108 --rc geninfo_unexecuted_blocks=1 00:05:53.108 00:05:53.108 ' 00:05:53.108 09:43:18 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:53.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.108 --rc genhtml_branch_coverage=1 00:05:53.108 --rc genhtml_function_coverage=1 00:05:53.108 --rc genhtml_legend=1 00:05:53.108 --rc geninfo_all_blocks=1 00:05:53.108 --rc geninfo_unexecuted_blocks=1 00:05:53.108 00:05:53.108 ' 00:05:53.108 09:43:18 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:53.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.108 --rc genhtml_branch_coverage=1 00:05:53.108 --rc genhtml_function_coverage=1 00:05:53.108 --rc 
genhtml_legend=1 00:05:53.108 --rc geninfo_all_blocks=1 00:05:53.108 --rc geninfo_unexecuted_blocks=1 00:05:53.108 00:05:53.108 ' 00:05:53.108 09:43:18 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:53.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.108 --rc genhtml_branch_coverage=1 00:05:53.108 --rc genhtml_function_coverage=1 00:05:53.108 --rc genhtml_legend=1 00:05:53.108 --rc geninfo_all_blocks=1 00:05:53.108 --rc geninfo_unexecuted_blocks=1 00:05:53.108 00:05:53.108 ' 00:05:53.108 09:43:18 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56672 00:05:53.108 09:43:18 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:53.108 09:43:18 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:53.108 09:43:18 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56672 00:05:53.108 09:43:18 rpc -- common/autotest_common.sh@835 -- # '[' -z 56672 ']' 00:05:53.108 09:43:18 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.108 09:43:18 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:53.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:53.108 09:43:18 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.108 09:43:18 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:53.108 09:43:18 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.108 [2024-12-06 09:43:18.291703] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:05:53.108 [2024-12-06 09:43:18.291832] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56672 ] 00:05:53.366 [2024-12-06 09:43:18.440615] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.366 [2024-12-06 09:43:18.499759] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:53.366 [2024-12-06 09:43:18.499835] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56672' to capture a snapshot of events at runtime. 00:05:53.366 [2024-12-06 09:43:18.499853] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:53.366 [2024-12-06 09:43:18.499864] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:53.366 [2024-12-06 09:43:18.499873] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56672 for offline analysis/debug. 
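rpc.sh starts the target with '-e bdev', which is why a shared-memory trace file is created at /dev/shm/spdk_tgt_trace.pid56672 and why the bdev tpoint_mask later reads 0xffffffffffffffff in the trace_get_info output. Following the notice above, a snapshot of that trace can be captured while the target runs; a sketch using the binaries from this build:

# Start the target with the bdev tracepoint group enabled, as rpc.sh does above.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev &
tgt_pid=$!

# Capture a snapshot of the trace buffer for that pid (the exact invocation the notice suggests).
spdk_trace -s spdk_tgt -p "$tgt_pid"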
00:05:53.366 [2024-12-06 09:43:18.500398] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.366 [2024-12-06 09:43:18.579144] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:53.626 09:43:18 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:53.626 09:43:18 rpc -- common/autotest_common.sh@868 -- # return 0 00:05:53.626 09:43:18 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:53.626 09:43:18 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:53.626 09:43:18 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:53.626 09:43:18 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:53.626 09:43:18 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:53.626 09:43:18 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:53.626 09:43:18 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.626 ************************************ 00:05:53.626 START TEST rpc_integrity 00:05:53.626 ************************************ 00:05:53.626 09:43:18 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:53.626 09:43:18 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:53.626 09:43:18 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.626 09:43:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:53.626 09:43:18 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.626 09:43:18 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:53.626 09:43:18 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:53.626 09:43:18 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:53.626 09:43:18 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:53.626 09:43:18 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.626 09:43:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:53.626 09:43:18 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.626 09:43:18 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:53.626 09:43:18 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:53.626 09:43:18 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.626 09:43:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:53.626 09:43:18 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.626 09:43:18 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:53.626 { 00:05:53.626 "name": "Malloc0", 00:05:53.626 "aliases": [ 00:05:53.626 "d91b48e4-4ef2-4019-b983-86e94d5916a5" 00:05:53.626 ], 00:05:53.626 "product_name": "Malloc disk", 00:05:53.626 "block_size": 512, 00:05:53.626 "num_blocks": 16384, 00:05:53.626 "uuid": "d91b48e4-4ef2-4019-b983-86e94d5916a5", 00:05:53.626 "assigned_rate_limits": { 00:05:53.626 "rw_ios_per_sec": 0, 00:05:53.626 "rw_mbytes_per_sec": 0, 00:05:53.626 "r_mbytes_per_sec": 0, 00:05:53.626 "w_mbytes_per_sec": 0 00:05:53.626 }, 00:05:53.626 "claimed": false, 00:05:53.626 "zoned": false, 00:05:53.626 
"supported_io_types": { 00:05:53.626 "read": true, 00:05:53.626 "write": true, 00:05:53.626 "unmap": true, 00:05:53.626 "flush": true, 00:05:53.626 "reset": true, 00:05:53.626 "nvme_admin": false, 00:05:53.626 "nvme_io": false, 00:05:53.626 "nvme_io_md": false, 00:05:53.626 "write_zeroes": true, 00:05:53.626 "zcopy": true, 00:05:53.626 "get_zone_info": false, 00:05:53.626 "zone_management": false, 00:05:53.626 "zone_append": false, 00:05:53.626 "compare": false, 00:05:53.626 "compare_and_write": false, 00:05:53.626 "abort": true, 00:05:53.626 "seek_hole": false, 00:05:53.626 "seek_data": false, 00:05:53.626 "copy": true, 00:05:53.626 "nvme_iov_md": false 00:05:53.626 }, 00:05:53.626 "memory_domains": [ 00:05:53.626 { 00:05:53.626 "dma_device_id": "system", 00:05:53.626 "dma_device_type": 1 00:05:53.626 }, 00:05:53.626 { 00:05:53.626 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:53.626 "dma_device_type": 2 00:05:53.626 } 00:05:53.626 ], 00:05:53.626 "driver_specific": {} 00:05:53.626 } 00:05:53.626 ]' 00:05:53.626 09:43:18 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:53.886 09:43:18 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:53.886 09:43:18 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:53.886 09:43:18 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.886 09:43:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:53.886 [2024-12-06 09:43:18.950835] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:53.886 [2024-12-06 09:43:18.950898] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:53.886 [2024-12-06 09:43:18.950917] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x23eecb0 00:05:53.886 [2024-12-06 09:43:18.950926] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:53.886 [2024-12-06 09:43:18.952393] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:53.886 [2024-12-06 09:43:18.952443] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:53.886 Passthru0 00:05:53.886 09:43:18 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.886 09:43:18 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:53.886 09:43:18 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.886 09:43:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:53.886 09:43:18 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.886 09:43:18 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:53.886 { 00:05:53.886 "name": "Malloc0", 00:05:53.886 "aliases": [ 00:05:53.886 "d91b48e4-4ef2-4019-b983-86e94d5916a5" 00:05:53.886 ], 00:05:53.886 "product_name": "Malloc disk", 00:05:53.886 "block_size": 512, 00:05:53.886 "num_blocks": 16384, 00:05:53.886 "uuid": "d91b48e4-4ef2-4019-b983-86e94d5916a5", 00:05:53.886 "assigned_rate_limits": { 00:05:53.886 "rw_ios_per_sec": 0, 00:05:53.886 "rw_mbytes_per_sec": 0, 00:05:53.886 "r_mbytes_per_sec": 0, 00:05:53.886 "w_mbytes_per_sec": 0 00:05:53.886 }, 00:05:53.886 "claimed": true, 00:05:53.886 "claim_type": "exclusive_write", 00:05:53.886 "zoned": false, 00:05:53.886 "supported_io_types": { 00:05:53.886 "read": true, 00:05:53.886 "write": true, 00:05:53.886 "unmap": true, 00:05:53.886 "flush": true, 00:05:53.886 "reset": true, 00:05:53.886 "nvme_admin": false, 
00:05:53.886 "nvme_io": false, 00:05:53.886 "nvme_io_md": false, 00:05:53.886 "write_zeroes": true, 00:05:53.886 "zcopy": true, 00:05:53.886 "get_zone_info": false, 00:05:53.886 "zone_management": false, 00:05:53.886 "zone_append": false, 00:05:53.886 "compare": false, 00:05:53.886 "compare_and_write": false, 00:05:53.886 "abort": true, 00:05:53.886 "seek_hole": false, 00:05:53.886 "seek_data": false, 00:05:53.886 "copy": true, 00:05:53.886 "nvme_iov_md": false 00:05:53.886 }, 00:05:53.886 "memory_domains": [ 00:05:53.886 { 00:05:53.886 "dma_device_id": "system", 00:05:53.886 "dma_device_type": 1 00:05:53.886 }, 00:05:53.886 { 00:05:53.886 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:53.886 "dma_device_type": 2 00:05:53.886 } 00:05:53.886 ], 00:05:53.886 "driver_specific": {} 00:05:53.886 }, 00:05:53.886 { 00:05:53.886 "name": "Passthru0", 00:05:53.886 "aliases": [ 00:05:53.886 "9057f7b1-c30e-569b-9e41-ca0ee6c393c5" 00:05:53.886 ], 00:05:53.886 "product_name": "passthru", 00:05:53.886 "block_size": 512, 00:05:53.886 "num_blocks": 16384, 00:05:53.886 "uuid": "9057f7b1-c30e-569b-9e41-ca0ee6c393c5", 00:05:53.886 "assigned_rate_limits": { 00:05:53.886 "rw_ios_per_sec": 0, 00:05:53.886 "rw_mbytes_per_sec": 0, 00:05:53.886 "r_mbytes_per_sec": 0, 00:05:53.886 "w_mbytes_per_sec": 0 00:05:53.886 }, 00:05:53.886 "claimed": false, 00:05:53.886 "zoned": false, 00:05:53.886 "supported_io_types": { 00:05:53.886 "read": true, 00:05:53.886 "write": true, 00:05:53.886 "unmap": true, 00:05:53.886 "flush": true, 00:05:53.886 "reset": true, 00:05:53.886 "nvme_admin": false, 00:05:53.886 "nvme_io": false, 00:05:53.886 "nvme_io_md": false, 00:05:53.886 "write_zeroes": true, 00:05:53.886 "zcopy": true, 00:05:53.886 "get_zone_info": false, 00:05:53.886 "zone_management": false, 00:05:53.886 "zone_append": false, 00:05:53.886 "compare": false, 00:05:53.886 "compare_and_write": false, 00:05:53.886 "abort": true, 00:05:53.886 "seek_hole": false, 00:05:53.886 "seek_data": false, 00:05:53.886 "copy": true, 00:05:53.886 "nvme_iov_md": false 00:05:53.886 }, 00:05:53.886 "memory_domains": [ 00:05:53.886 { 00:05:53.886 "dma_device_id": "system", 00:05:53.886 "dma_device_type": 1 00:05:53.886 }, 00:05:53.886 { 00:05:53.886 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:53.886 "dma_device_type": 2 00:05:53.886 } 00:05:53.886 ], 00:05:53.886 "driver_specific": { 00:05:53.886 "passthru": { 00:05:53.886 "name": "Passthru0", 00:05:53.886 "base_bdev_name": "Malloc0" 00:05:53.886 } 00:05:53.886 } 00:05:53.886 } 00:05:53.886 ]' 00:05:53.886 09:43:18 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:53.887 09:43:19 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:53.887 09:43:19 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:53.887 09:43:19 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.887 09:43:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:53.887 09:43:19 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.887 09:43:19 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:53.887 09:43:19 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.887 09:43:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:53.887 09:43:19 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.887 09:43:19 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:53.887 09:43:19 rpc.rpc_integrity -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.887 09:43:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:53.887 09:43:19 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.887 09:43:19 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:53.887 09:43:19 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:53.887 09:43:19 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:53.887 00:05:53.887 real 0m0.324s 00:05:53.887 user 0m0.217s 00:05:53.887 sys 0m0.042s 00:05:53.887 09:43:19 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:53.887 ************************************ 00:05:53.887 09:43:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:53.887 END TEST rpc_integrity 00:05:53.887 ************************************ 00:05:54.147 09:43:19 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:54.147 09:43:19 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:54.147 09:43:19 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:54.147 09:43:19 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.147 ************************************ 00:05:54.147 START TEST rpc_plugins 00:05:54.147 ************************************ 00:05:54.147 09:43:19 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:05:54.147 09:43:19 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:54.147 09:43:19 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.147 09:43:19 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:54.147 09:43:19 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.147 09:43:19 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:54.147 09:43:19 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:54.147 09:43:19 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.147 09:43:19 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:54.147 09:43:19 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.147 09:43:19 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:54.147 { 00:05:54.147 "name": "Malloc1", 00:05:54.147 "aliases": [ 00:05:54.147 "4453420d-5733-44b1-b6b8-00e96795ae3a" 00:05:54.147 ], 00:05:54.147 "product_name": "Malloc disk", 00:05:54.147 "block_size": 4096, 00:05:54.147 "num_blocks": 256, 00:05:54.147 "uuid": "4453420d-5733-44b1-b6b8-00e96795ae3a", 00:05:54.147 "assigned_rate_limits": { 00:05:54.147 "rw_ios_per_sec": 0, 00:05:54.147 "rw_mbytes_per_sec": 0, 00:05:54.147 "r_mbytes_per_sec": 0, 00:05:54.147 "w_mbytes_per_sec": 0 00:05:54.147 }, 00:05:54.147 "claimed": false, 00:05:54.147 "zoned": false, 00:05:54.147 "supported_io_types": { 00:05:54.147 "read": true, 00:05:54.147 "write": true, 00:05:54.147 "unmap": true, 00:05:54.147 "flush": true, 00:05:54.147 "reset": true, 00:05:54.147 "nvme_admin": false, 00:05:54.147 "nvme_io": false, 00:05:54.147 "nvme_io_md": false, 00:05:54.147 "write_zeroes": true, 00:05:54.147 "zcopy": true, 00:05:54.147 "get_zone_info": false, 00:05:54.147 "zone_management": false, 00:05:54.147 "zone_append": false, 00:05:54.147 "compare": false, 00:05:54.147 "compare_and_write": false, 00:05:54.147 "abort": true, 00:05:54.147 "seek_hole": false, 00:05:54.147 "seek_data": false, 00:05:54.147 "copy": true, 00:05:54.147 "nvme_iov_md": false 00:05:54.147 }, 00:05:54.147 "memory_domains": [ 00:05:54.147 { 
00:05:54.147 "dma_device_id": "system", 00:05:54.147 "dma_device_type": 1 00:05:54.147 }, 00:05:54.147 { 00:05:54.147 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:54.147 "dma_device_type": 2 00:05:54.147 } 00:05:54.147 ], 00:05:54.147 "driver_specific": {} 00:05:54.147 } 00:05:54.147 ]' 00:05:54.147 09:43:19 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:54.147 09:43:19 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:54.147 09:43:19 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:54.147 09:43:19 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.147 09:43:19 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:54.147 09:43:19 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.147 09:43:19 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:54.147 09:43:19 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.147 09:43:19 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:54.147 09:43:19 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.147 09:43:19 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:54.147 09:43:19 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:54.147 09:43:19 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:54.147 00:05:54.147 real 0m0.165s 00:05:54.147 user 0m0.115s 00:05:54.147 sys 0m0.017s 00:05:54.147 09:43:19 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:54.147 09:43:19 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:54.147 ************************************ 00:05:54.147 END TEST rpc_plugins 00:05:54.147 ************************************ 00:05:54.147 09:43:19 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:54.147 09:43:19 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:54.147 09:43:19 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:54.147 09:43:19 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.147 ************************************ 00:05:54.147 START TEST rpc_trace_cmd_test 00:05:54.147 ************************************ 00:05:54.147 09:43:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:05:54.147 09:43:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:54.147 09:43:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:54.147 09:43:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.147 09:43:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:54.147 09:43:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.147 09:43:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:54.147 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56672", 00:05:54.147 "tpoint_group_mask": "0x8", 00:05:54.147 "iscsi_conn": { 00:05:54.147 "mask": "0x2", 00:05:54.147 "tpoint_mask": "0x0" 00:05:54.147 }, 00:05:54.147 "scsi": { 00:05:54.147 "mask": "0x4", 00:05:54.147 "tpoint_mask": "0x0" 00:05:54.147 }, 00:05:54.147 "bdev": { 00:05:54.147 "mask": "0x8", 00:05:54.147 "tpoint_mask": "0xffffffffffffffff" 00:05:54.147 }, 00:05:54.147 "nvmf_rdma": { 00:05:54.147 "mask": "0x10", 00:05:54.147 "tpoint_mask": "0x0" 00:05:54.147 }, 00:05:54.147 "nvmf_tcp": { 00:05:54.147 "mask": "0x20", 00:05:54.147 "tpoint_mask": "0x0" 00:05:54.147 }, 00:05:54.147 "ftl": { 00:05:54.147 
"mask": "0x40", 00:05:54.147 "tpoint_mask": "0x0" 00:05:54.147 }, 00:05:54.147 "blobfs": { 00:05:54.147 "mask": "0x80", 00:05:54.147 "tpoint_mask": "0x0" 00:05:54.147 }, 00:05:54.147 "dsa": { 00:05:54.147 "mask": "0x200", 00:05:54.147 "tpoint_mask": "0x0" 00:05:54.147 }, 00:05:54.147 "thread": { 00:05:54.147 "mask": "0x400", 00:05:54.147 "tpoint_mask": "0x0" 00:05:54.147 }, 00:05:54.147 "nvme_pcie": { 00:05:54.147 "mask": "0x800", 00:05:54.147 "tpoint_mask": "0x0" 00:05:54.147 }, 00:05:54.147 "iaa": { 00:05:54.147 "mask": "0x1000", 00:05:54.147 "tpoint_mask": "0x0" 00:05:54.147 }, 00:05:54.147 "nvme_tcp": { 00:05:54.147 "mask": "0x2000", 00:05:54.147 "tpoint_mask": "0x0" 00:05:54.147 }, 00:05:54.147 "bdev_nvme": { 00:05:54.147 "mask": "0x4000", 00:05:54.147 "tpoint_mask": "0x0" 00:05:54.147 }, 00:05:54.147 "sock": { 00:05:54.147 "mask": "0x8000", 00:05:54.147 "tpoint_mask": "0x0" 00:05:54.147 }, 00:05:54.147 "blob": { 00:05:54.147 "mask": "0x10000", 00:05:54.147 "tpoint_mask": "0x0" 00:05:54.147 }, 00:05:54.147 "bdev_raid": { 00:05:54.147 "mask": "0x20000", 00:05:54.147 "tpoint_mask": "0x0" 00:05:54.147 }, 00:05:54.147 "scheduler": { 00:05:54.147 "mask": "0x40000", 00:05:54.147 "tpoint_mask": "0x0" 00:05:54.147 } 00:05:54.147 }' 00:05:54.147 09:43:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:54.407 09:43:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:54.407 09:43:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:54.407 09:43:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:54.407 09:43:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:54.407 09:43:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:54.407 09:43:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:54.407 09:43:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:54.407 09:43:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:54.407 09:43:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:54.407 00:05:54.407 real 0m0.271s 00:05:54.407 user 0m0.235s 00:05:54.407 sys 0m0.026s 00:05:54.407 09:43:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:54.407 09:43:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:54.407 ************************************ 00:05:54.407 END TEST rpc_trace_cmd_test 00:05:54.407 ************************************ 00:05:54.666 09:43:19 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:54.666 09:43:19 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:54.666 09:43:19 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:54.666 09:43:19 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:54.666 09:43:19 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:54.666 09:43:19 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.666 ************************************ 00:05:54.666 START TEST rpc_daemon_integrity 00:05:54.666 ************************************ 00:05:54.666 09:43:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:54.666 09:43:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:54.666 09:43:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.666 09:43:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:54.666 
09:43:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.666 09:43:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:54.666 09:43:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:54.666 09:43:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:54.666 09:43:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:54.666 09:43:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.666 09:43:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:54.666 09:43:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.666 09:43:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:54.666 09:43:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:54.666 09:43:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.666 09:43:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:54.666 09:43:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.666 09:43:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:54.666 { 00:05:54.666 "name": "Malloc2", 00:05:54.666 "aliases": [ 00:05:54.666 "175a4704-d855-4185-aaff-ae356a8ba1b1" 00:05:54.666 ], 00:05:54.666 "product_name": "Malloc disk", 00:05:54.666 "block_size": 512, 00:05:54.666 "num_blocks": 16384, 00:05:54.666 "uuid": "175a4704-d855-4185-aaff-ae356a8ba1b1", 00:05:54.666 "assigned_rate_limits": { 00:05:54.666 "rw_ios_per_sec": 0, 00:05:54.666 "rw_mbytes_per_sec": 0, 00:05:54.666 "r_mbytes_per_sec": 0, 00:05:54.666 "w_mbytes_per_sec": 0 00:05:54.666 }, 00:05:54.666 "claimed": false, 00:05:54.666 "zoned": false, 00:05:54.666 "supported_io_types": { 00:05:54.666 "read": true, 00:05:54.666 "write": true, 00:05:54.666 "unmap": true, 00:05:54.666 "flush": true, 00:05:54.666 "reset": true, 00:05:54.666 "nvme_admin": false, 00:05:54.666 "nvme_io": false, 00:05:54.666 "nvme_io_md": false, 00:05:54.666 "write_zeroes": true, 00:05:54.666 "zcopy": true, 00:05:54.666 "get_zone_info": false, 00:05:54.666 "zone_management": false, 00:05:54.666 "zone_append": false, 00:05:54.666 "compare": false, 00:05:54.666 "compare_and_write": false, 00:05:54.666 "abort": true, 00:05:54.666 "seek_hole": false, 00:05:54.666 "seek_data": false, 00:05:54.666 "copy": true, 00:05:54.666 "nvme_iov_md": false 00:05:54.666 }, 00:05:54.666 "memory_domains": [ 00:05:54.666 { 00:05:54.666 "dma_device_id": "system", 00:05:54.666 "dma_device_type": 1 00:05:54.666 }, 00:05:54.666 { 00:05:54.666 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:54.666 "dma_device_type": 2 00:05:54.666 } 00:05:54.666 ], 00:05:54.667 "driver_specific": {} 00:05:54.667 } 00:05:54.667 ]' 00:05:54.667 09:43:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:54.667 09:43:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:54.667 09:43:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:54.667 09:43:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.667 09:43:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:54.667 [2024-12-06 09:43:19.864653] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:54.667 [2024-12-06 09:43:19.864744] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:05:54.667 [2024-12-06 09:43:19.864764] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2452270 00:05:54.667 [2024-12-06 09:43:19.864774] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:54.667 [2024-12-06 09:43:19.866224] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:54.667 [2024-12-06 09:43:19.866277] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:54.667 Passthru0 00:05:54.667 09:43:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.667 09:43:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:54.667 09:43:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.667 09:43:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:54.667 09:43:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.667 09:43:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:54.667 { 00:05:54.667 "name": "Malloc2", 00:05:54.667 "aliases": [ 00:05:54.667 "175a4704-d855-4185-aaff-ae356a8ba1b1" 00:05:54.667 ], 00:05:54.667 "product_name": "Malloc disk", 00:05:54.667 "block_size": 512, 00:05:54.667 "num_blocks": 16384, 00:05:54.667 "uuid": "175a4704-d855-4185-aaff-ae356a8ba1b1", 00:05:54.667 "assigned_rate_limits": { 00:05:54.667 "rw_ios_per_sec": 0, 00:05:54.667 "rw_mbytes_per_sec": 0, 00:05:54.667 "r_mbytes_per_sec": 0, 00:05:54.667 "w_mbytes_per_sec": 0 00:05:54.667 }, 00:05:54.667 "claimed": true, 00:05:54.667 "claim_type": "exclusive_write", 00:05:54.667 "zoned": false, 00:05:54.667 "supported_io_types": { 00:05:54.667 "read": true, 00:05:54.667 "write": true, 00:05:54.667 "unmap": true, 00:05:54.667 "flush": true, 00:05:54.667 "reset": true, 00:05:54.667 "nvme_admin": false, 00:05:54.667 "nvme_io": false, 00:05:54.667 "nvme_io_md": false, 00:05:54.667 "write_zeroes": true, 00:05:54.667 "zcopy": true, 00:05:54.667 "get_zone_info": false, 00:05:54.667 "zone_management": false, 00:05:54.667 "zone_append": false, 00:05:54.667 "compare": false, 00:05:54.667 "compare_and_write": false, 00:05:54.667 "abort": true, 00:05:54.667 "seek_hole": false, 00:05:54.667 "seek_data": false, 00:05:54.667 "copy": true, 00:05:54.667 "nvme_iov_md": false 00:05:54.667 }, 00:05:54.667 "memory_domains": [ 00:05:54.667 { 00:05:54.667 "dma_device_id": "system", 00:05:54.667 "dma_device_type": 1 00:05:54.667 }, 00:05:54.667 { 00:05:54.667 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:54.667 "dma_device_type": 2 00:05:54.667 } 00:05:54.667 ], 00:05:54.667 "driver_specific": {} 00:05:54.667 }, 00:05:54.667 { 00:05:54.667 "name": "Passthru0", 00:05:54.667 "aliases": [ 00:05:54.667 "515afde5-96b3-5218-a000-c4ac5bdb3f3b" 00:05:54.667 ], 00:05:54.667 "product_name": "passthru", 00:05:54.667 "block_size": 512, 00:05:54.667 "num_blocks": 16384, 00:05:54.667 "uuid": "515afde5-96b3-5218-a000-c4ac5bdb3f3b", 00:05:54.667 "assigned_rate_limits": { 00:05:54.667 "rw_ios_per_sec": 0, 00:05:54.667 "rw_mbytes_per_sec": 0, 00:05:54.667 "r_mbytes_per_sec": 0, 00:05:54.667 "w_mbytes_per_sec": 0 00:05:54.667 }, 00:05:54.667 "claimed": false, 00:05:54.667 "zoned": false, 00:05:54.667 "supported_io_types": { 00:05:54.667 "read": true, 00:05:54.667 "write": true, 00:05:54.667 "unmap": true, 00:05:54.667 "flush": true, 00:05:54.667 "reset": true, 00:05:54.667 "nvme_admin": false, 00:05:54.667 "nvme_io": false, 00:05:54.667 
"nvme_io_md": false, 00:05:54.667 "write_zeroes": true, 00:05:54.667 "zcopy": true, 00:05:54.667 "get_zone_info": false, 00:05:54.667 "zone_management": false, 00:05:54.667 "zone_append": false, 00:05:54.667 "compare": false, 00:05:54.667 "compare_and_write": false, 00:05:54.667 "abort": true, 00:05:54.667 "seek_hole": false, 00:05:54.667 "seek_data": false, 00:05:54.667 "copy": true, 00:05:54.667 "nvme_iov_md": false 00:05:54.667 }, 00:05:54.667 "memory_domains": [ 00:05:54.667 { 00:05:54.667 "dma_device_id": "system", 00:05:54.667 "dma_device_type": 1 00:05:54.667 }, 00:05:54.667 { 00:05:54.667 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:54.667 "dma_device_type": 2 00:05:54.667 } 00:05:54.667 ], 00:05:54.667 "driver_specific": { 00:05:54.667 "passthru": { 00:05:54.667 "name": "Passthru0", 00:05:54.667 "base_bdev_name": "Malloc2" 00:05:54.667 } 00:05:54.667 } 00:05:54.667 } 00:05:54.667 ]' 00:05:54.667 09:43:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:54.926 09:43:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:54.926 09:43:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:54.926 09:43:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.926 09:43:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:54.926 09:43:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.926 09:43:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:54.926 09:43:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.926 09:43:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:54.926 09:43:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.926 09:43:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:54.926 09:43:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.926 09:43:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:54.926 09:43:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.926 09:43:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:54.926 09:43:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:54.926 09:43:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:54.926 00:05:54.926 real 0m0.322s 00:05:54.926 user 0m0.213s 00:05:54.926 sys 0m0.045s 00:05:54.926 09:43:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:54.927 09:43:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:54.927 ************************************ 00:05:54.927 END TEST rpc_daemon_integrity 00:05:54.927 ************************************ 00:05:54.927 09:43:20 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:54.927 09:43:20 rpc -- rpc/rpc.sh@84 -- # killprocess 56672 00:05:54.927 09:43:20 rpc -- common/autotest_common.sh@954 -- # '[' -z 56672 ']' 00:05:54.927 09:43:20 rpc -- common/autotest_common.sh@958 -- # kill -0 56672 00:05:54.927 09:43:20 rpc -- common/autotest_common.sh@959 -- # uname 00:05:54.927 09:43:20 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:54.927 09:43:20 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56672 00:05:54.927 09:43:20 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:05:54.927 09:43:20 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:54.927 killing process with pid 56672 00:05:54.927 09:43:20 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56672' 00:05:54.927 09:43:20 rpc -- common/autotest_common.sh@973 -- # kill 56672 00:05:54.927 09:43:20 rpc -- common/autotest_common.sh@978 -- # wait 56672 00:05:55.574 00:05:55.574 real 0m2.486s 00:05:55.574 user 0m3.113s 00:05:55.574 sys 0m0.710s 00:05:55.574 09:43:20 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:55.574 ************************************ 00:05:55.574 09:43:20 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.574 END TEST rpc 00:05:55.574 ************************************ 00:05:55.574 09:43:20 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:55.574 09:43:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:55.574 09:43:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:55.574 09:43:20 -- common/autotest_common.sh@10 -- # set +x 00:05:55.574 ************************************ 00:05:55.574 START TEST skip_rpc 00:05:55.574 ************************************ 00:05:55.574 09:43:20 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:55.574 * Looking for test storage... 00:05:55.574 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:55.574 09:43:20 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:55.574 09:43:20 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:55.574 09:43:20 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:55.574 09:43:20 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:55.574 09:43:20 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:55.574 09:43:20 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:55.574 09:43:20 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:55.574 09:43:20 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:55.574 09:43:20 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:55.574 09:43:20 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:55.574 09:43:20 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:55.574 09:43:20 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:55.574 09:43:20 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:55.574 09:43:20 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:55.574 09:43:20 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:55.574 09:43:20 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:55.574 09:43:20 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:55.574 09:43:20 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:55.574 09:43:20 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:55.574 09:43:20 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:55.574 09:43:20 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:55.574 09:43:20 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:55.574 09:43:20 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:55.574 09:43:20 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:55.574 09:43:20 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:55.574 09:43:20 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:55.574 09:43:20 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:55.574 09:43:20 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:55.574 09:43:20 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:55.574 09:43:20 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:55.574 09:43:20 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:55.574 09:43:20 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:55.574 09:43:20 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:55.574 09:43:20 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:55.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.574 --rc genhtml_branch_coverage=1 00:05:55.574 --rc genhtml_function_coverage=1 00:05:55.574 --rc genhtml_legend=1 00:05:55.574 --rc geninfo_all_blocks=1 00:05:55.574 --rc geninfo_unexecuted_blocks=1 00:05:55.574 00:05:55.574 ' 00:05:55.574 09:43:20 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:55.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.574 --rc genhtml_branch_coverage=1 00:05:55.574 --rc genhtml_function_coverage=1 00:05:55.574 --rc genhtml_legend=1 00:05:55.574 --rc geninfo_all_blocks=1 00:05:55.574 --rc geninfo_unexecuted_blocks=1 00:05:55.574 00:05:55.574 ' 00:05:55.574 09:43:20 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:55.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.574 --rc genhtml_branch_coverage=1 00:05:55.574 --rc genhtml_function_coverage=1 00:05:55.574 --rc genhtml_legend=1 00:05:55.574 --rc geninfo_all_blocks=1 00:05:55.574 --rc geninfo_unexecuted_blocks=1 00:05:55.574 00:05:55.574 ' 00:05:55.574 09:43:20 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:55.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.574 --rc genhtml_branch_coverage=1 00:05:55.574 --rc genhtml_function_coverage=1 00:05:55.574 --rc genhtml_legend=1 00:05:55.574 --rc geninfo_all_blocks=1 00:05:55.574 --rc geninfo_unexecuted_blocks=1 00:05:55.574 00:05:55.574 ' 00:05:55.574 09:43:20 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:55.574 09:43:20 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:55.574 09:43:20 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:55.574 09:43:20 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:55.574 09:43:20 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:55.574 09:43:20 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.574 ************************************ 00:05:55.574 START TEST skip_rpc 00:05:55.574 ************************************ 00:05:55.574 09:43:20 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:05:55.574 09:43:20 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@16 -- # local spdk_pid=56876 00:05:55.574 09:43:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:55.574 09:43:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:55.574 09:43:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:55.847 [2024-12-06 09:43:20.852506] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:05:55.847 [2024-12-06 09:43:20.852678] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56876 ] 00:05:55.847 [2024-12-06 09:43:21.005619] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.847 [2024-12-06 09:43:21.065486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.106 [2024-12-06 09:43:21.141863] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:01.380 09:43:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:01.380 09:43:25 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:01.380 09:43:25 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:01.380 09:43:25 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:01.380 09:43:25 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:01.380 09:43:25 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:01.380 09:43:25 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:01.380 09:43:25 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:06:01.380 09:43:25 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:01.380 09:43:25 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.380 09:43:25 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:01.380 09:43:25 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:01.380 09:43:25 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:01.380 09:43:25 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:01.380 09:43:25 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:01.380 09:43:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:01.380 09:43:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 56876 00:06:01.380 09:43:25 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 56876 ']' 00:06:01.380 09:43:25 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 56876 00:06:01.381 09:43:25 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:06:01.381 09:43:25 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:01.381 09:43:25 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56876 00:06:01.381 09:43:25 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:01.381 09:43:25 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:01.381 killing process with pid 56876 00:06:01.381 09:43:25 skip_rpc.skip_rpc -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 56876' 00:06:01.381 09:43:25 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 56876 00:06:01.381 09:43:25 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 56876 00:06:01.381 00:06:01.381 real 0m5.426s 00:06:01.381 user 0m5.028s 00:06:01.381 sys 0m0.315s 00:06:01.381 09:43:26 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:01.381 ************************************ 00:06:01.381 09:43:26 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.381 END TEST skip_rpc 00:06:01.381 ************************************ 00:06:01.381 09:43:26 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:01.381 09:43:26 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:01.381 09:43:26 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:01.381 09:43:26 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.381 ************************************ 00:06:01.381 START TEST skip_rpc_with_json 00:06:01.381 ************************************ 00:06:01.381 09:43:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:06:01.381 09:43:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:01.381 09:43:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=56957 00:06:01.381 09:43:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:01.381 09:43:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 56957 00:06:01.381 09:43:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:01.381 09:43:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 56957 ']' 00:06:01.381 09:43:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.381 09:43:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:01.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:01.381 09:43:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.381 09:43:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:01.381 09:43:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:01.381 [2024-12-06 09:43:26.319093] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
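The first skip_rpc case above launches the target with --no-rpc-server, so nothing ever listens on /var/tmp/spdk.sock and the spdk_get_version call is required to fail (the es=1 seen above). That check reduces to a few lines; a sketch with the same binary and flags as this run:

# Target without its RPC server, pinned to core 0 as in the test.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 &
tgt_pid=$!
sleep 5

# With no RPC listener this must fail; the test treats success here as an error.
if ./scripts/rpc.py spdk_get_version; then
    echo 'unexpected: RPC server answered' >&2
fi

kill "$tgt_pid"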
00:06:01.381 [2024-12-06 09:43:26.319208] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56957 ] 00:06:01.381 [2024-12-06 09:43:26.472366] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.381 [2024-12-06 09:43:26.536629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.381 [2024-12-06 09:43:26.616354] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:02.318 09:43:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:02.318 09:43:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:06:02.318 09:43:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:02.318 09:43:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:02.318 09:43:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:02.318 [2024-12-06 09:43:27.337754] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:02.318 request: 00:06:02.318 { 00:06:02.318 "trtype": "tcp", 00:06:02.318 "method": "nvmf_get_transports", 00:06:02.318 "req_id": 1 00:06:02.318 } 00:06:02.318 Got JSON-RPC error response 00:06:02.318 response: 00:06:02.318 { 00:06:02.318 "code": -19, 00:06:02.318 "message": "No such device" 00:06:02.318 } 00:06:02.318 09:43:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:02.318 09:43:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:02.318 09:43:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:02.318 09:43:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:02.318 [2024-12-06 09:43:27.349866] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:02.318 09:43:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:02.318 09:43:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:02.318 09:43:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:02.318 09:43:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:02.318 09:43:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:02.318 09:43:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:02.318 { 00:06:02.318 "subsystems": [ 00:06:02.318 { 00:06:02.318 "subsystem": "fsdev", 00:06:02.318 "config": [ 00:06:02.318 { 00:06:02.318 "method": "fsdev_set_opts", 00:06:02.318 "params": { 00:06:02.318 "fsdev_io_pool_size": 65535, 00:06:02.318 "fsdev_io_cache_size": 256 00:06:02.318 } 00:06:02.318 } 00:06:02.318 ] 00:06:02.318 }, 00:06:02.318 { 00:06:02.318 "subsystem": "keyring", 00:06:02.318 "config": [] 00:06:02.318 }, 00:06:02.318 { 00:06:02.318 "subsystem": "iobuf", 00:06:02.318 "config": [ 00:06:02.318 { 00:06:02.318 "method": "iobuf_set_options", 00:06:02.318 "params": { 00:06:02.318 "small_pool_count": 8192, 00:06:02.318 "large_pool_count": 1024, 00:06:02.318 "small_bufsize": 8192, 00:06:02.318 "large_bufsize": 135168, 00:06:02.318 "enable_numa": false 00:06:02.318 } 
00:06:02.318 } 00:06:02.318 ] 00:06:02.318 }, 00:06:02.318 { 00:06:02.318 "subsystem": "sock", 00:06:02.318 "config": [ 00:06:02.318 { 00:06:02.318 "method": "sock_set_default_impl", 00:06:02.318 "params": { 00:06:02.318 "impl_name": "uring" 00:06:02.318 } 00:06:02.318 }, 00:06:02.318 { 00:06:02.318 "method": "sock_impl_set_options", 00:06:02.318 "params": { 00:06:02.318 "impl_name": "ssl", 00:06:02.318 "recv_buf_size": 4096, 00:06:02.318 "send_buf_size": 4096, 00:06:02.318 "enable_recv_pipe": true, 00:06:02.318 "enable_quickack": false, 00:06:02.318 "enable_placement_id": 0, 00:06:02.318 "enable_zerocopy_send_server": true, 00:06:02.318 "enable_zerocopy_send_client": false, 00:06:02.318 "zerocopy_threshold": 0, 00:06:02.318 "tls_version": 0, 00:06:02.318 "enable_ktls": false 00:06:02.318 } 00:06:02.318 }, 00:06:02.318 { 00:06:02.318 "method": "sock_impl_set_options", 00:06:02.318 "params": { 00:06:02.318 "impl_name": "posix", 00:06:02.318 "recv_buf_size": 2097152, 00:06:02.318 "send_buf_size": 2097152, 00:06:02.318 "enable_recv_pipe": true, 00:06:02.318 "enable_quickack": false, 00:06:02.318 "enable_placement_id": 0, 00:06:02.318 "enable_zerocopy_send_server": true, 00:06:02.318 "enable_zerocopy_send_client": false, 00:06:02.318 "zerocopy_threshold": 0, 00:06:02.318 "tls_version": 0, 00:06:02.318 "enable_ktls": false 00:06:02.318 } 00:06:02.318 }, 00:06:02.318 { 00:06:02.318 "method": "sock_impl_set_options", 00:06:02.318 "params": { 00:06:02.318 "impl_name": "uring", 00:06:02.318 "recv_buf_size": 2097152, 00:06:02.318 "send_buf_size": 2097152, 00:06:02.318 "enable_recv_pipe": true, 00:06:02.318 "enable_quickack": false, 00:06:02.318 "enable_placement_id": 0, 00:06:02.318 "enable_zerocopy_send_server": false, 00:06:02.318 "enable_zerocopy_send_client": false, 00:06:02.318 "zerocopy_threshold": 0, 00:06:02.318 "tls_version": 0, 00:06:02.318 "enable_ktls": false 00:06:02.318 } 00:06:02.318 } 00:06:02.318 ] 00:06:02.318 }, 00:06:02.318 { 00:06:02.318 "subsystem": "vmd", 00:06:02.318 "config": [] 00:06:02.318 }, 00:06:02.318 { 00:06:02.318 "subsystem": "accel", 00:06:02.318 "config": [ 00:06:02.318 { 00:06:02.318 "method": "accel_set_options", 00:06:02.318 "params": { 00:06:02.318 "small_cache_size": 128, 00:06:02.318 "large_cache_size": 16, 00:06:02.318 "task_count": 2048, 00:06:02.318 "sequence_count": 2048, 00:06:02.318 "buf_count": 2048 00:06:02.318 } 00:06:02.318 } 00:06:02.318 ] 00:06:02.318 }, 00:06:02.318 { 00:06:02.318 "subsystem": "bdev", 00:06:02.318 "config": [ 00:06:02.318 { 00:06:02.318 "method": "bdev_set_options", 00:06:02.318 "params": { 00:06:02.318 "bdev_io_pool_size": 65535, 00:06:02.318 "bdev_io_cache_size": 256, 00:06:02.318 "bdev_auto_examine": true, 00:06:02.318 "iobuf_small_cache_size": 128, 00:06:02.318 "iobuf_large_cache_size": 16 00:06:02.318 } 00:06:02.318 }, 00:06:02.318 { 00:06:02.318 "method": "bdev_raid_set_options", 00:06:02.318 "params": { 00:06:02.318 "process_window_size_kb": 1024, 00:06:02.318 "process_max_bandwidth_mb_sec": 0 00:06:02.318 } 00:06:02.318 }, 00:06:02.318 { 00:06:02.318 "method": "bdev_iscsi_set_options", 00:06:02.318 "params": { 00:06:02.318 "timeout_sec": 30 00:06:02.318 } 00:06:02.318 }, 00:06:02.318 { 00:06:02.318 "method": "bdev_nvme_set_options", 00:06:02.318 "params": { 00:06:02.318 "action_on_timeout": "none", 00:06:02.318 "timeout_us": 0, 00:06:02.318 "timeout_admin_us": 0, 00:06:02.318 "keep_alive_timeout_ms": 10000, 00:06:02.318 "arbitration_burst": 0, 00:06:02.318 "low_priority_weight": 0, 00:06:02.318 "medium_priority_weight": 
0, 00:06:02.318 "high_priority_weight": 0, 00:06:02.318 "nvme_adminq_poll_period_us": 10000, 00:06:02.318 "nvme_ioq_poll_period_us": 0, 00:06:02.318 "io_queue_requests": 0, 00:06:02.318 "delay_cmd_submit": true, 00:06:02.318 "transport_retry_count": 4, 00:06:02.318 "bdev_retry_count": 3, 00:06:02.318 "transport_ack_timeout": 0, 00:06:02.318 "ctrlr_loss_timeout_sec": 0, 00:06:02.318 "reconnect_delay_sec": 0, 00:06:02.318 "fast_io_fail_timeout_sec": 0, 00:06:02.318 "disable_auto_failback": false, 00:06:02.318 "generate_uuids": false, 00:06:02.318 "transport_tos": 0, 00:06:02.318 "nvme_error_stat": false, 00:06:02.318 "rdma_srq_size": 0, 00:06:02.318 "io_path_stat": false, 00:06:02.318 "allow_accel_sequence": false, 00:06:02.318 "rdma_max_cq_size": 0, 00:06:02.318 "rdma_cm_event_timeout_ms": 0, 00:06:02.318 "dhchap_digests": [ 00:06:02.318 "sha256", 00:06:02.318 "sha384", 00:06:02.318 "sha512" 00:06:02.318 ], 00:06:02.318 "dhchap_dhgroups": [ 00:06:02.318 "null", 00:06:02.318 "ffdhe2048", 00:06:02.318 "ffdhe3072", 00:06:02.318 "ffdhe4096", 00:06:02.318 "ffdhe6144", 00:06:02.318 "ffdhe8192" 00:06:02.318 ] 00:06:02.318 } 00:06:02.318 }, 00:06:02.318 { 00:06:02.318 "method": "bdev_nvme_set_hotplug", 00:06:02.318 "params": { 00:06:02.318 "period_us": 100000, 00:06:02.318 "enable": false 00:06:02.318 } 00:06:02.318 }, 00:06:02.318 { 00:06:02.318 "method": "bdev_wait_for_examine" 00:06:02.318 } 00:06:02.318 ] 00:06:02.318 }, 00:06:02.318 { 00:06:02.318 "subsystem": "scsi", 00:06:02.318 "config": null 00:06:02.319 }, 00:06:02.319 { 00:06:02.319 "subsystem": "scheduler", 00:06:02.319 "config": [ 00:06:02.319 { 00:06:02.319 "method": "framework_set_scheduler", 00:06:02.319 "params": { 00:06:02.319 "name": "static" 00:06:02.319 } 00:06:02.319 } 00:06:02.319 ] 00:06:02.319 }, 00:06:02.319 { 00:06:02.319 "subsystem": "vhost_scsi", 00:06:02.319 "config": [] 00:06:02.319 }, 00:06:02.319 { 00:06:02.319 "subsystem": "vhost_blk", 00:06:02.319 "config": [] 00:06:02.319 }, 00:06:02.319 { 00:06:02.319 "subsystem": "ublk", 00:06:02.319 "config": [] 00:06:02.319 }, 00:06:02.319 { 00:06:02.319 "subsystem": "nbd", 00:06:02.319 "config": [] 00:06:02.319 }, 00:06:02.319 { 00:06:02.319 "subsystem": "nvmf", 00:06:02.319 "config": [ 00:06:02.319 { 00:06:02.319 "method": "nvmf_set_config", 00:06:02.319 "params": { 00:06:02.319 "discovery_filter": "match_any", 00:06:02.319 "admin_cmd_passthru": { 00:06:02.319 "identify_ctrlr": false 00:06:02.319 }, 00:06:02.319 "dhchap_digests": [ 00:06:02.319 "sha256", 00:06:02.319 "sha384", 00:06:02.319 "sha512" 00:06:02.319 ], 00:06:02.319 "dhchap_dhgroups": [ 00:06:02.319 "null", 00:06:02.319 "ffdhe2048", 00:06:02.319 "ffdhe3072", 00:06:02.319 "ffdhe4096", 00:06:02.319 "ffdhe6144", 00:06:02.319 "ffdhe8192" 00:06:02.319 ] 00:06:02.319 } 00:06:02.319 }, 00:06:02.319 { 00:06:02.319 "method": "nvmf_set_max_subsystems", 00:06:02.319 "params": { 00:06:02.319 "max_subsystems": 1024 00:06:02.319 } 00:06:02.319 }, 00:06:02.319 { 00:06:02.319 "method": "nvmf_set_crdt", 00:06:02.319 "params": { 00:06:02.319 "crdt1": 0, 00:06:02.319 "crdt2": 0, 00:06:02.319 "crdt3": 0 00:06:02.319 } 00:06:02.319 }, 00:06:02.319 { 00:06:02.319 "method": "nvmf_create_transport", 00:06:02.319 "params": { 00:06:02.319 "trtype": "TCP", 00:06:02.319 "max_queue_depth": 128, 00:06:02.319 "max_io_qpairs_per_ctrlr": 127, 00:06:02.319 "in_capsule_data_size": 4096, 00:06:02.319 "max_io_size": 131072, 00:06:02.319 "io_unit_size": 131072, 00:06:02.319 "max_aq_depth": 128, 00:06:02.319 "num_shared_buffers": 511, 00:06:02.319 
"buf_cache_size": 4294967295, 00:06:02.319 "dif_insert_or_strip": false, 00:06:02.319 "zcopy": false, 00:06:02.319 "c2h_success": true, 00:06:02.319 "sock_priority": 0, 00:06:02.319 "abort_timeout_sec": 1, 00:06:02.319 "ack_timeout": 0, 00:06:02.319 "data_wr_pool_size": 0 00:06:02.319 } 00:06:02.319 } 00:06:02.319 ] 00:06:02.319 }, 00:06:02.319 { 00:06:02.319 "subsystem": "iscsi", 00:06:02.319 "config": [ 00:06:02.319 { 00:06:02.319 "method": "iscsi_set_options", 00:06:02.319 "params": { 00:06:02.319 "node_base": "iqn.2016-06.io.spdk", 00:06:02.319 "max_sessions": 128, 00:06:02.319 "max_connections_per_session": 2, 00:06:02.319 "max_queue_depth": 64, 00:06:02.319 "default_time2wait": 2, 00:06:02.319 "default_time2retain": 20, 00:06:02.319 "first_burst_length": 8192, 00:06:02.319 "immediate_data": true, 00:06:02.319 "allow_duplicated_isid": false, 00:06:02.319 "error_recovery_level": 0, 00:06:02.319 "nop_timeout": 60, 00:06:02.319 "nop_in_interval": 30, 00:06:02.319 "disable_chap": false, 00:06:02.319 "require_chap": false, 00:06:02.319 "mutual_chap": false, 00:06:02.319 "chap_group": 0, 00:06:02.319 "max_large_datain_per_connection": 64, 00:06:02.319 "max_r2t_per_connection": 4, 00:06:02.319 "pdu_pool_size": 36864, 00:06:02.319 "immediate_data_pool_size": 16384, 00:06:02.319 "data_out_pool_size": 2048 00:06:02.319 } 00:06:02.319 } 00:06:02.319 ] 00:06:02.319 } 00:06:02.319 ] 00:06:02.319 } 00:06:02.319 09:43:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:02.319 09:43:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 56957 00:06:02.319 09:43:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 56957 ']' 00:06:02.319 09:43:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 56957 00:06:02.319 09:43:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:06:02.319 09:43:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:02.319 09:43:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56957 00:06:02.319 09:43:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:02.319 09:43:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:02.319 killing process with pid 56957 00:06:02.319 09:43:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56957' 00:06:02.319 09:43:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 56957 00:06:02.319 09:43:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 56957 00:06:02.887 09:43:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=56990 00:06:02.887 09:43:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:02.887 09:43:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:08.158 09:43:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 56990 00:06:08.158 09:43:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 56990 ']' 00:06:08.158 09:43:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 56990 00:06:08.158 09:43:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:06:08.158 09:43:32 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:08.158 09:43:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56990 00:06:08.158 09:43:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:08.158 09:43:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:08.158 killing process with pid 56990 00:06:08.158 09:43:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56990' 00:06:08.158 09:43:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 56990 00:06:08.158 09:43:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 56990 00:06:08.158 09:43:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:08.158 09:43:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:08.158 00:06:08.158 real 0m7.107s 00:06:08.158 user 0m6.863s 00:06:08.158 sys 0m0.689s 00:06:08.158 09:43:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:08.158 09:43:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:08.158 ************************************ 00:06:08.158 END TEST skip_rpc_with_json 00:06:08.158 ************************************ 00:06:08.158 09:43:33 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:08.158 09:43:33 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:08.158 09:43:33 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:08.158 09:43:33 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.158 ************************************ 00:06:08.158 START TEST skip_rpc_with_delay 00:06:08.158 ************************************ 00:06:08.158 09:43:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:06:08.158 09:43:33 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:08.158 09:43:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:06:08.158 09:43:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:08.158 09:43:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:08.158 09:43:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:08.158 09:43:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:08.158 09:43:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:08.158 09:43:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:08.158 09:43:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:08.158 09:43:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:08.158 09:43:33 
skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:08.158 09:43:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:08.418 [2024-12-06 09:43:33.480464] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:06:08.418 09:43:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:06:08.418 09:43:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:08.418 09:43:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:08.418 09:43:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:08.418 00:06:08.418 real 0m0.091s 00:06:08.418 user 0m0.054s 00:06:08.418 sys 0m0.035s 00:06:08.418 09:43:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:08.418 ************************************ 00:06:08.418 END TEST skip_rpc_with_delay 00:06:08.418 ************************************ 00:06:08.418 09:43:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:08.418 09:43:33 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:08.418 09:43:33 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:08.418 09:43:33 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:08.418 09:43:33 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:08.418 09:43:33 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:08.418 09:43:33 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.418 ************************************ 00:06:08.418 START TEST exit_on_failed_rpc_init 00:06:08.418 ************************************ 00:06:08.418 09:43:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:06:08.418 09:43:33 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57094 00:06:08.418 09:43:33 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57094 00:06:08.418 09:43:33 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:08.418 09:43:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57094 ']' 00:06:08.418 09:43:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.418 09:43:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:08.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:08.418 09:43:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.418 09:43:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:08.418 09:43:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:08.418 [2024-12-06 09:43:33.633801] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:06:08.418 [2024-12-06 09:43:33.634178] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57094 ] 00:06:08.676 [2024-12-06 09:43:33.776771] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.676 [2024-12-06 09:43:33.831853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.676 [2024-12-06 09:43:33.903579] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:09.610 09:43:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:09.610 09:43:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:06:09.610 09:43:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:09.610 09:43:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:09.610 09:43:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:06:09.610 09:43:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:09.610 09:43:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:09.610 09:43:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:09.610 09:43:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:09.610 09:43:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:09.610 09:43:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:09.610 09:43:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:09.610 09:43:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:09.610 09:43:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:09.610 09:43:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:09.610 [2024-12-06 09:43:34.662176] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:06:09.610 [2024-12-06 09:43:34.662282] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57112 ] 00:06:09.610 [2024-12-06 09:43:34.814590] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.869 [2024-12-06 09:43:34.882685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:09.869 [2024-12-06 09:43:34.882820] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
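The "socket path in use" error just above is the expected outcome of exit_on_failed_rpc_init rather than a regression: the test starts a second target against the RPC socket the first target already owns and requires that second start to exit non-zero. A minimal sketch of the scenario, with the binary path and core masks taken from the trace (the harness's waitforlisten synchronization and its NOT() exit-status helper are omitted here):

    tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

    # First target owns the default RPC socket /var/tmp/spdk.sock
    $tgt -m 0x1 &
    first_pid=$!

    # Second target on another core mask but the same socket must fail;
    # rpc.c rejects it with "RPC Unix domain socket path ... in use"
    if $tgt -m 0x2; then
        echo 'unexpected success' >&2
        exit 1
    fi

    kill "$first_pid"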
00:06:09.869 [2024-12-06 09:43:34.882839] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:09.869 [2024-12-06 09:43:34.882850] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:09.869 09:43:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:06:09.869 09:43:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:09.869 09:43:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:06:09.869 09:43:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:06:09.869 09:43:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:06:09.869 09:43:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:09.869 09:43:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:09.869 09:43:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57094 00:06:09.869 09:43:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57094 ']' 00:06:09.869 09:43:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57094 00:06:09.869 09:43:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:06:09.869 09:43:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:09.869 09:43:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57094 00:06:09.869 killing process with pid 57094 00:06:09.869 09:43:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:09.869 09:43:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:09.869 09:43:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57094' 00:06:09.869 09:43:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57094 00:06:09.869 09:43:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57094 00:06:10.127 00:06:10.127 real 0m1.799s 00:06:10.127 user 0m2.059s 00:06:10.127 sys 0m0.432s 00:06:10.127 09:43:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:10.127 09:43:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:10.127 ************************************ 00:06:10.127 END TEST exit_on_failed_rpc_init 00:06:10.127 ************************************ 00:06:10.386 09:43:35 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:10.386 00:06:10.386 real 0m14.842s 00:06:10.386 user 0m14.188s 00:06:10.386 sys 0m1.697s 00:06:10.386 ************************************ 00:06:10.386 END TEST skip_rpc 00:06:10.386 ************************************ 00:06:10.386 09:43:35 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:10.386 09:43:35 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.386 09:43:35 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:10.386 09:43:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:10.386 09:43:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:10.386 09:43:35 -- common/autotest_common.sh@10 -- # set +x 00:06:10.386 
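For reference, the large JSON document earlier in this trace is the save_config dump that skip_rpc_with_json captured from the first target; the test then relaunched the target purely from that file with the RPC server disabled and grepped the log for the TCP transport banner, as the rpc/skip_rpc.sh lines above show. A rough reconstruction of that flow (config.json and log.txt paths are the ones in the trace; how stdout is redirected into log.txt is assumed, and $spdk_pid stands for the pid the harness tracked):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    cfg=/home/vagrant/spdk_repo/spdk/test/rpc/config.json
    log=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt

    # Capture the running target's configuration, then stop it
    $rpc save_config > "$cfg"
    kill "$spdk_pid"

    # Relaunch from JSON only, without an RPC server, and confirm the
    # nvmf TCP transport initialized by scanning the log
    $tgt --no-rpc-server -m 0x1 --json "$cfg" > "$log" 2>&1 &
    sleep 5
    grep -q 'TCP Transport Init' "$log"
    rm "$log" "$cfg"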
************************************ 00:06:10.386 START TEST rpc_client 00:06:10.386 ************************************ 00:06:10.386 09:43:35 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:10.386 * Looking for test storage... 00:06:10.386 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:06:10.386 09:43:35 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:10.386 09:43:35 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:06:10.386 09:43:35 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:10.386 09:43:35 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:10.386 09:43:35 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:10.386 09:43:35 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:10.386 09:43:35 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:10.386 09:43:35 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:10.386 09:43:35 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:10.386 09:43:35 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:10.386 09:43:35 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:10.386 09:43:35 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:10.386 09:43:35 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:10.386 09:43:35 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:10.386 09:43:35 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:10.386 09:43:35 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:06:10.386 09:43:35 rpc_client -- scripts/common.sh@345 -- # : 1 00:06:10.386 09:43:35 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:10.386 09:43:35 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:10.386 09:43:35 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:10.386 09:43:35 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:10.386 09:43:35 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:10.386 09:43:35 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:10.386 09:43:35 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:10.386 09:43:35 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:10.386 09:43:35 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:10.386 09:43:35 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:10.386 09:43:35 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:10.386 09:43:35 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:10.386 09:43:35 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:10.386 09:43:35 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:10.386 09:43:35 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:10.386 09:43:35 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:10.386 09:43:35 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:10.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.386 --rc genhtml_branch_coverage=1 00:06:10.386 --rc genhtml_function_coverage=1 00:06:10.386 --rc genhtml_legend=1 00:06:10.386 --rc geninfo_all_blocks=1 00:06:10.386 --rc geninfo_unexecuted_blocks=1 00:06:10.386 00:06:10.386 ' 00:06:10.386 09:43:35 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:10.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.386 --rc genhtml_branch_coverage=1 00:06:10.386 --rc genhtml_function_coverage=1 00:06:10.386 --rc genhtml_legend=1 00:06:10.386 --rc geninfo_all_blocks=1 00:06:10.386 --rc geninfo_unexecuted_blocks=1 00:06:10.386 00:06:10.386 ' 00:06:10.386 09:43:35 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:10.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.386 --rc genhtml_branch_coverage=1 00:06:10.386 --rc genhtml_function_coverage=1 00:06:10.386 --rc genhtml_legend=1 00:06:10.386 --rc geninfo_all_blocks=1 00:06:10.386 --rc geninfo_unexecuted_blocks=1 00:06:10.386 00:06:10.386 ' 00:06:10.386 09:43:35 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:10.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.386 --rc genhtml_branch_coverage=1 00:06:10.386 --rc genhtml_function_coverage=1 00:06:10.386 --rc genhtml_legend=1 00:06:10.386 --rc geninfo_all_blocks=1 00:06:10.386 --rc geninfo_unexecuted_blocks=1 00:06:10.386 00:06:10.386 ' 00:06:10.386 09:43:35 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:06:10.646 OK 00:06:10.646 09:43:35 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:10.646 00:06:10.646 real 0m0.215s 00:06:10.646 user 0m0.134s 00:06:10.646 sys 0m0.091s 00:06:10.646 ************************************ 00:06:10.646 END TEST rpc_client 00:06:10.646 ************************************ 00:06:10.646 09:43:35 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:10.646 09:43:35 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:10.646 09:43:35 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:10.646 09:43:35 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:10.646 09:43:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:10.646 09:43:35 -- common/autotest_common.sh@10 -- # set +x 00:06:10.646 ************************************ 00:06:10.646 START TEST json_config 00:06:10.646 ************************************ 00:06:10.646 09:43:35 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:10.646 09:43:35 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:10.646 09:43:35 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:06:10.646 09:43:35 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:10.646 09:43:35 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:10.646 09:43:35 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:10.646 09:43:35 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:10.646 09:43:35 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:10.646 09:43:35 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:10.646 09:43:35 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:10.646 09:43:35 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:10.646 09:43:35 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:10.646 09:43:35 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:10.646 09:43:35 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:10.646 09:43:35 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:10.646 09:43:35 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:10.646 09:43:35 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:10.646 09:43:35 json_config -- scripts/common.sh@345 -- # : 1 00:06:10.646 09:43:35 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:10.646 09:43:35 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:10.646 09:43:35 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:10.646 09:43:35 json_config -- scripts/common.sh@353 -- # local d=1 00:06:10.646 09:43:35 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:10.646 09:43:35 json_config -- scripts/common.sh@355 -- # echo 1 00:06:10.646 09:43:35 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:10.646 09:43:35 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:10.646 09:43:35 json_config -- scripts/common.sh@353 -- # local d=2 00:06:10.646 09:43:35 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:10.646 09:43:35 json_config -- scripts/common.sh@355 -- # echo 2 00:06:10.646 09:43:35 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:10.646 09:43:35 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:10.646 09:43:35 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:10.646 09:43:35 json_config -- scripts/common.sh@368 -- # return 0 00:06:10.646 09:43:35 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:10.646 09:43:35 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:10.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.646 --rc genhtml_branch_coverage=1 00:06:10.646 --rc genhtml_function_coverage=1 00:06:10.646 --rc genhtml_legend=1 00:06:10.646 --rc geninfo_all_blocks=1 00:06:10.646 --rc geninfo_unexecuted_blocks=1 00:06:10.646 00:06:10.646 ' 00:06:10.646 09:43:35 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:10.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.646 --rc genhtml_branch_coverage=1 00:06:10.647 --rc genhtml_function_coverage=1 00:06:10.647 --rc genhtml_legend=1 00:06:10.647 --rc geninfo_all_blocks=1 00:06:10.647 --rc geninfo_unexecuted_blocks=1 00:06:10.647 00:06:10.647 ' 00:06:10.647 09:43:35 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:10.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.647 --rc genhtml_branch_coverage=1 00:06:10.647 --rc genhtml_function_coverage=1 00:06:10.647 --rc genhtml_legend=1 00:06:10.647 --rc geninfo_all_blocks=1 00:06:10.647 --rc geninfo_unexecuted_blocks=1 00:06:10.647 00:06:10.647 ' 00:06:10.647 09:43:35 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:10.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.647 --rc genhtml_branch_coverage=1 00:06:10.647 --rc genhtml_function_coverage=1 00:06:10.647 --rc genhtml_legend=1 00:06:10.647 --rc geninfo_all_blocks=1 00:06:10.647 --rc geninfo_unexecuted_blocks=1 00:06:10.647 00:06:10.647 ' 00:06:10.647 09:43:35 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:10.647 09:43:35 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:10.647 09:43:35 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:10.647 09:43:35 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:10.647 09:43:35 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:10.647 09:43:35 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:10.647 09:43:35 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:10.647 09:43:35 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:10.647 09:43:35 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:10.647 09:43:35 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:10.647 09:43:35 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:10.647 09:43:35 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:10.647 09:43:35 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:06:10.647 09:43:35 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:06:10.647 09:43:35 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:10.647 09:43:35 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:10.647 09:43:35 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:10.647 09:43:35 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:10.647 09:43:35 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:10.647 09:43:35 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:10.647 09:43:35 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:10.647 09:43:35 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:10.647 09:43:35 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:10.647 09:43:35 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.647 09:43:35 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.647 09:43:35 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.647 09:43:35 json_config -- paths/export.sh@5 -- # export PATH 00:06:10.647 09:43:35 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.647 09:43:35 json_config -- nvmf/common.sh@51 -- # : 0 00:06:10.647 09:43:35 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:10.647 09:43:35 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:10.647 09:43:35 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:10.647 09:43:35 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:10.647 09:43:35 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:10.647 09:43:35 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:10.647 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:10.647 09:43:35 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:10.647 09:43:35 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:10.647 09:43:35 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:10.647 09:43:35 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:10.647 09:43:35 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:10.647 09:43:35 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:10.647 09:43:35 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:10.647 09:43:35 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:10.647 09:43:35 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:06:10.647 09:43:35 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:06:10.647 09:43:35 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:10.647 09:43:35 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:06:10.647 INFO: JSON configuration test init 00:06:10.647 09:43:35 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:10.647 09:43:35 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:06:10.647 09:43:35 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:06:10.647 09:43:35 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:06:10.647 09:43:35 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:06:10.647 09:43:35 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:10.647 09:43:35 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:06:10.647 09:43:35 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:06:10.648 09:43:35 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:06:10.648 09:43:35 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:10.648 09:43:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:10.648 09:43:35 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:06:10.648 09:43:35 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:10.648 09:43:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:10.648 Waiting for target to run... 00:06:10.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
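The json_config_test_start_app call that follows simply expands the app_params and app_socket tables declared above into a spdk_tgt command line and waits for its RPC socket. A hedged sketch of that step for the 'target' role (flags and socket path are verbatim from the trace; waitforlisten is the autotest_common.sh helper that polls until the socket answers):

    tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

    # app_params[target]='-m 0x1 -s 1024', app_socket[target]='/var/tmp/spdk_tgt.sock'
    $tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
    app_pid=$!

    # Do not issue any rpc.py calls until the RPC socket is listening
    waitforlisten "$app_pid" /var/tmp/spdk_tgt.sock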
00:06:10.648 09:43:35 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:06:10.648 09:43:35 json_config -- json_config/common.sh@9 -- # local app=target 00:06:10.648 09:43:35 json_config -- json_config/common.sh@10 -- # shift 00:06:10.648 09:43:35 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:10.648 09:43:35 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:10.648 09:43:35 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:10.648 09:43:35 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:10.648 09:43:35 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:10.648 09:43:35 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=57252 00:06:10.648 09:43:35 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:10.648 09:43:35 json_config -- json_config/common.sh@25 -- # waitforlisten 57252 /var/tmp/spdk_tgt.sock 00:06:10.648 09:43:35 json_config -- common/autotest_common.sh@835 -- # '[' -z 57252 ']' 00:06:10.648 09:43:35 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:10.648 09:43:35 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:10.648 09:43:35 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:10.648 09:43:35 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:10.648 09:43:35 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:10.648 09:43:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:10.907 [2024-12-06 09:43:35.980290] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:06:10.907 [2024-12-06 09:43:35.980647] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57252 ] 00:06:11.166 [2024-12-06 09:43:36.423033] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.424 [2024-12-06 09:43:36.462212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.990 09:43:36 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:11.990 09:43:36 json_config -- common/autotest_common.sh@868 -- # return 0 00:06:11.990 09:43:36 json_config -- json_config/common.sh@26 -- # echo '' 00:06:11.990 00:06:11.990 09:43:36 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:06:11.990 09:43:36 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:06:11.990 09:43:36 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:11.990 09:43:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:11.990 09:43:36 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:06:11.990 09:43:36 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:06:11.990 09:43:36 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:11.990 09:43:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:11.990 09:43:37 json_config -- json_config/json_config.sh@280 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:11.990 09:43:37 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:06:11.990 09:43:37 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:12.249 [2024-12-06 09:43:37.319892] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:12.249 09:43:37 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:06:12.249 09:43:37 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:12.249 09:43:37 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:12.249 09:43:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:12.249 09:43:37 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:06:12.249 09:43:37 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:12.249 09:43:37 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:06:12.249 09:43:37 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:06:12.249 09:43:37 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:06:12.508 09:43:37 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:06:12.508 09:43:37 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:06:12.508 09:43:37 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:12.508 09:43:37 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:06:12.508 09:43:37 json_config -- json_config/json_config.sh@51 -- # local get_types 00:06:12.508 09:43:37 json_config -- json_config/json_config.sh@53 
-- # local type_diff 00:06:12.508 09:43:37 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:06:12.508 09:43:37 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:06:12.508 09:43:37 json_config -- json_config/json_config.sh@54 -- # sort 00:06:12.508 09:43:37 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:06:12.767 09:43:37 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:06:12.767 09:43:37 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:06:12.767 09:43:37 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:06:12.767 09:43:37 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:12.767 09:43:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:12.767 09:43:37 json_config -- json_config/json_config.sh@62 -- # return 0 00:06:12.767 09:43:37 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:06:12.767 09:43:37 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:06:12.767 09:43:37 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:06:12.767 09:43:37 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:06:12.767 09:43:37 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:06:12.767 09:43:37 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:06:12.767 09:43:37 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:12.767 09:43:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:12.767 09:43:37 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:12.767 09:43:37 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:06:12.767 09:43:37 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:06:12.767 09:43:37 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:12.767 09:43:37 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:13.026 MallocForNvmf0 00:06:13.026 09:43:38 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:13.026 09:43:38 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:13.285 MallocForNvmf1 00:06:13.285 09:43:38 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:13.285 09:43:38 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:13.544 [2024-12-06 09:43:38.620970] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:13.544 09:43:38 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:13.544 09:43:38 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:13.804 09:43:38 json_config -- json_config/json_config.sh@254 -- # tgt_rpc 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:13.804 09:43:38 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:14.064 09:43:39 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:14.064 09:43:39 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:14.322 09:43:39 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:14.322 09:43:39 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:14.581 [2024-12-06 09:43:39.657563] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:14.581 09:43:39 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:06:14.581 09:43:39 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:14.581 09:43:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:14.581 09:43:39 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:06:14.581 09:43:39 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:14.581 09:43:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:14.581 09:43:39 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:06:14.581 09:43:39 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:14.581 09:43:39 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:14.840 MallocBdevForConfigChangeCheck 00:06:14.840 09:43:40 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:06:14.840 09:43:40 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:14.840 09:43:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:14.840 09:43:40 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:06:14.840 09:43:40 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:15.408 INFO: shutting down applications... 00:06:15.408 09:43:40 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 
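Collected in one place, the create_nvmf_subsystem_config sequence above is just a handful of rpc.py calls against the target socket; the values below are exactly the ones in the trace, and the rpc() wrapper stands in for the harness's tgt_rpc helper:

    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock "$@"; }

    rpc bdev_malloc_create 8 512 --name MallocForNvmf0
    rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
    rpc nvmf_create_transport -t tcp -u 8192 -c 0
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420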
00:06:15.408 09:43:40 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:06:15.408 09:43:40 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:06:15.408 09:43:40 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:06:15.408 09:43:40 json_config -- json_config/json_config.sh@340 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:15.667 Calling clear_iscsi_subsystem 00:06:15.667 Calling clear_nvmf_subsystem 00:06:15.667 Calling clear_nbd_subsystem 00:06:15.667 Calling clear_ublk_subsystem 00:06:15.667 Calling clear_vhost_blk_subsystem 00:06:15.667 Calling clear_vhost_scsi_subsystem 00:06:15.667 Calling clear_bdev_subsystem 00:06:15.667 09:43:40 json_config -- json_config/json_config.sh@344 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:06:15.667 09:43:40 json_config -- json_config/json_config.sh@350 -- # count=100 00:06:15.667 09:43:40 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:06:15.667 09:43:40 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:15.667 09:43:40 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:15.667 09:43:40 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:06:16.234 09:43:41 json_config -- json_config/json_config.sh@352 -- # break 00:06:16.234 09:43:41 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:06:16.234 09:43:41 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:06:16.234 09:43:41 json_config -- json_config/common.sh@31 -- # local app=target 00:06:16.234 09:43:41 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:16.234 09:43:41 json_config -- json_config/common.sh@35 -- # [[ -n 57252 ]] 00:06:16.234 09:43:41 json_config -- json_config/common.sh@38 -- # kill -SIGINT 57252 00:06:16.234 09:43:41 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:16.234 09:43:41 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:16.234 09:43:41 json_config -- json_config/common.sh@41 -- # kill -0 57252 00:06:16.234 09:43:41 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:16.803 09:43:41 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:16.803 09:43:41 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:16.803 09:43:41 json_config -- json_config/common.sh@41 -- # kill -0 57252 00:06:16.803 SPDK target shutdown done 00:06:16.803 INFO: relaunching applications... 00:06:16.803 09:43:41 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:16.804 09:43:41 json_config -- json_config/common.sh@43 -- # break 00:06:16.804 09:43:41 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:16.804 09:43:41 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:16.804 09:43:41 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 
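The 'SPDK target shutdown done' message above is produced by json_config_test_shutdown_app, which sends SIGINT and then probes the pid for up to thirty half-second intervals. A condensed sketch of that loop, using pid 57252 from this run:

    kill -SIGINT 57252

    for (( i = 0; i < 30; i++ )); do
        # kill -0 only checks that the process still exists
        if ! kill -0 57252 2>/dev/null; then
            echo 'SPDK target shutdown done'
            break
        fi
        sleep 0.5
    done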
00:06:16.804 09:43:41 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:16.804 09:43:41 json_config -- json_config/common.sh@9 -- # local app=target 00:06:16.804 09:43:41 json_config -- json_config/common.sh@10 -- # shift 00:06:16.804 09:43:41 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:16.804 09:43:41 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:16.804 09:43:41 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:16.804 09:43:41 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:16.804 09:43:41 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:16.804 09:43:41 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=57448 00:06:16.804 09:43:41 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:16.804 09:43:41 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:16.804 Waiting for target to run... 00:06:16.804 09:43:41 json_config -- json_config/common.sh@25 -- # waitforlisten 57448 /var/tmp/spdk_tgt.sock 00:06:16.804 09:43:41 json_config -- common/autotest_common.sh@835 -- # '[' -z 57448 ']' 00:06:16.804 09:43:41 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:16.804 09:43:41 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:16.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:16.804 09:43:41 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:16.804 09:43:41 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:16.804 09:43:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:16.804 [2024-12-06 09:43:41.860677] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:06:16.804 [2024-12-06 09:43:41.860779] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57448 ] 00:06:17.063 [2024-12-06 09:43:42.297193] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.321 [2024-12-06 09:43:42.335058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.321 [2024-12-06 09:43:42.470593] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:17.580 [2024-12-06 09:43:42.679889] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:17.580 [2024-12-06 09:43:42.711963] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:17.580 00:06:17.580 INFO: Checking if target configuration is the same... 
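The 'Checking if target configuration is the same' step that follows compares the file the relaunched target booted from against a fresh save_config dump handed to json_diff.sh over a process-substitution fd; both sides are normalized with config_filter.py before diffing. A reduced sketch of that comparison (temp-file naming and helper paths mirror the trace; the /dev/fd wiring of the real script is simplified to a pipe here):

    rootdir=/home/vagrant/spdk_repo/spdk
    filter=$rootdir/test/json_config/config_filter.py

    live_cfg=$(mktemp /tmp/62.XXX)
    file_cfg=$(mktemp /tmp/spdk_tgt_config.json.XXX)

    # Sort both JSON documents so field ordering cannot cause a false diff
    $rootdir/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config | $filter -method sort > "$live_cfg"
    $filter -method sort < $rootdir/spdk_tgt_config.json > "$file_cfg"

    if diff -u "$live_cfg" "$file_cfg"; then
        echo 'INFO: JSON config files are the same'
    else
        echo 'INFO: configuration change detected.'
    fi
    rm "$live_cfg" "$file_cfg"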
00:06:17.580 09:43:42 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:17.580 09:43:42 json_config -- common/autotest_common.sh@868 -- # return 0 00:06:17.580 09:43:42 json_config -- json_config/common.sh@26 -- # echo '' 00:06:17.580 09:43:42 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:06:17.580 09:43:42 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:17.580 09:43:42 json_config -- json_config/json_config.sh@385 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:17.580 09:43:42 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:06:17.581 09:43:42 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:17.581 + '[' 2 -ne 2 ']' 00:06:17.581 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:06:17.581 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:06:17.581 + rootdir=/home/vagrant/spdk_repo/spdk 00:06:17.581 +++ basename /dev/fd/62 00:06:17.581 ++ mktemp /tmp/62.XXX 00:06:17.581 + tmp_file_1=/tmp/62.6zl 00:06:17.840 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:17.840 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:17.840 + tmp_file_2=/tmp/spdk_tgt_config.json.EhN 00:06:17.840 + ret=0 00:06:17.840 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:18.099 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:18.099 + diff -u /tmp/62.6zl /tmp/spdk_tgt_config.json.EhN 00:06:18.099 INFO: JSON config files are the same 00:06:18.099 + echo 'INFO: JSON config files are the same' 00:06:18.099 + rm /tmp/62.6zl /tmp/spdk_tgt_config.json.EhN 00:06:18.099 + exit 0 00:06:18.099 INFO: changing configuration and checking if this can be detected... 00:06:18.099 09:43:43 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:06:18.099 09:43:43 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:18.099 09:43:43 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:18.099 09:43:43 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:18.358 09:43:43 json_config -- json_config/json_config.sh@394 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:18.358 09:43:43 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:06:18.358 09:43:43 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:18.358 + '[' 2 -ne 2 ']' 00:06:18.358 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:06:18.358 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:06:18.358 + rootdir=/home/vagrant/spdk_repo/spdk 00:06:18.358 +++ basename /dev/fd/62 00:06:18.358 ++ mktemp /tmp/62.XXX 00:06:18.358 + tmp_file_1=/tmp/62.E5L 00:06:18.358 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:18.358 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:18.358 + tmp_file_2=/tmp/spdk_tgt_config.json.Erp 00:06:18.358 + ret=0 00:06:18.358 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:18.926 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:18.926 + diff -u /tmp/62.E5L /tmp/spdk_tgt_config.json.Erp 00:06:18.926 + ret=1 00:06:18.926 + echo '=== Start of file: /tmp/62.E5L ===' 00:06:18.926 + cat /tmp/62.E5L 00:06:18.926 + echo '=== End of file: /tmp/62.E5L ===' 00:06:18.926 + echo '' 00:06:18.926 + echo '=== Start of file: /tmp/spdk_tgt_config.json.Erp ===' 00:06:18.926 + cat /tmp/spdk_tgt_config.json.Erp 00:06:18.926 + echo '=== End of file: /tmp/spdk_tgt_config.json.Erp ===' 00:06:18.926 + echo '' 00:06:18.926 + rm /tmp/62.E5L /tmp/spdk_tgt_config.json.Erp 00:06:18.926 + exit 1 00:06:18.926 INFO: configuration change detected. 00:06:18.926 09:43:44 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:06:18.926 09:43:44 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:06:18.926 09:43:44 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:06:18.926 09:43:44 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:18.926 09:43:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:18.926 09:43:44 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:06:18.926 09:43:44 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:06:18.926 09:43:44 json_config -- json_config/json_config.sh@324 -- # [[ -n 57448 ]] 00:06:18.926 09:43:44 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:06:18.926 09:43:44 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:06:18.926 09:43:44 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:18.926 09:43:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:18.926 09:43:44 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:06:18.926 09:43:44 json_config -- json_config/json_config.sh@200 -- # uname -s 00:06:18.926 09:43:44 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:06:18.926 09:43:44 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:06:18.926 09:43:44 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:06:18.926 09:43:44 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:06:18.926 09:43:44 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:18.926 09:43:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:18.926 09:43:44 json_config -- json_config/json_config.sh@330 -- # killprocess 57448 00:06:18.926 09:43:44 json_config -- common/autotest_common.sh@954 -- # '[' -z 57448 ']' 00:06:18.926 09:43:44 json_config -- common/autotest_common.sh@958 -- # kill -0 57448 00:06:18.926 09:43:44 json_config -- common/autotest_common.sh@959 -- # uname 00:06:18.926 09:43:44 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:18.926 09:43:44 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57448 00:06:18.926 
killing process with pid 57448 00:06:18.926 09:43:44 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:18.926 09:43:44 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:18.926 09:43:44 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57448' 00:06:18.926 09:43:44 json_config -- common/autotest_common.sh@973 -- # kill 57448 00:06:18.926 09:43:44 json_config -- common/autotest_common.sh@978 -- # wait 57448 00:06:19.186 09:43:44 json_config -- json_config/json_config.sh@333 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:19.186 09:43:44 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:06:19.186 09:43:44 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:19.186 09:43:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:19.186 INFO: Success 00:06:19.186 09:43:44 json_config -- json_config/json_config.sh@335 -- # return 0 00:06:19.186 09:43:44 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:06:19.186 00:06:19.186 real 0m8.686s 00:06:19.186 user 0m12.336s 00:06:19.186 sys 0m1.862s 00:06:19.186 ************************************ 00:06:19.186 END TEST json_config 00:06:19.186 ************************************ 00:06:19.186 09:43:44 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:19.186 09:43:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:19.186 09:43:44 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:19.186 09:43:44 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:19.446 09:43:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:19.446 09:43:44 -- common/autotest_common.sh@10 -- # set +x 00:06:19.446 ************************************ 00:06:19.446 START TEST json_config_extra_key 00:06:19.446 ************************************ 00:06:19.446 09:43:44 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:19.446 09:43:44 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:19.446 09:43:44 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:06:19.446 09:43:44 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:19.446 09:43:44 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:19.446 09:43:44 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:19.446 09:43:44 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:19.446 09:43:44 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:19.446 09:43:44 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:19.446 09:43:44 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:19.446 09:43:44 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:19.447 09:43:44 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:19.447 09:43:44 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:19.447 09:43:44 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:06:19.447 09:43:44 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:19.447 09:43:44 json_config_extra_key -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:19.447 09:43:44 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:19.447 09:43:44 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:19.447 09:43:44 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:19.447 09:43:44 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:19.447 09:43:44 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:19.447 09:43:44 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:19.447 09:43:44 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:19.447 09:43:44 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:19.447 09:43:44 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:19.447 09:43:44 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:19.447 09:43:44 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:19.447 09:43:44 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:19.447 09:43:44 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:19.447 09:43:44 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:19.447 09:43:44 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:19.447 09:43:44 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:19.447 09:43:44 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:19.447 09:43:44 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:19.447 09:43:44 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:19.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.447 --rc genhtml_branch_coverage=1 00:06:19.447 --rc genhtml_function_coverage=1 00:06:19.447 --rc genhtml_legend=1 00:06:19.447 --rc geninfo_all_blocks=1 00:06:19.447 --rc geninfo_unexecuted_blocks=1 00:06:19.447 00:06:19.447 ' 00:06:19.447 09:43:44 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:19.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.447 --rc genhtml_branch_coverage=1 00:06:19.447 --rc genhtml_function_coverage=1 00:06:19.447 --rc genhtml_legend=1 00:06:19.447 --rc geninfo_all_blocks=1 00:06:19.447 --rc geninfo_unexecuted_blocks=1 00:06:19.447 00:06:19.447 ' 00:06:19.447 09:43:44 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:19.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.447 --rc genhtml_branch_coverage=1 00:06:19.447 --rc genhtml_function_coverage=1 00:06:19.447 --rc genhtml_legend=1 00:06:19.447 --rc geninfo_all_blocks=1 00:06:19.447 --rc geninfo_unexecuted_blocks=1 00:06:19.447 00:06:19.447 ' 00:06:19.447 09:43:44 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:19.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.447 --rc genhtml_branch_coverage=1 00:06:19.447 --rc genhtml_function_coverage=1 00:06:19.447 --rc genhtml_legend=1 00:06:19.447 --rc geninfo_all_blocks=1 00:06:19.447 --rc geninfo_unexecuted_blocks=1 00:06:19.447 00:06:19.447 ' 00:06:19.447 09:43:44 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:19.447 09:43:44 json_config_extra_key -- nvmf/common.sh@7 -- # 
uname -s 00:06:19.447 09:43:44 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:19.447 09:43:44 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:19.447 09:43:44 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:19.447 09:43:44 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:19.447 09:43:44 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:19.447 09:43:44 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:19.447 09:43:44 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:19.447 09:43:44 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:19.447 09:43:44 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:19.447 09:43:44 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:19.447 09:43:44 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:06:19.447 09:43:44 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:06:19.447 09:43:44 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:19.447 09:43:44 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:19.447 09:43:44 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:19.447 09:43:44 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:19.447 09:43:44 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:19.447 09:43:44 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:19.447 09:43:44 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:19.447 09:43:44 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:19.447 09:43:44 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:19.447 09:43:44 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.447 09:43:44 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.447 09:43:44 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.447 09:43:44 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:19.447 09:43:44 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.447 09:43:44 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:06:19.447 09:43:44 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:19.447 09:43:44 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:19.447 09:43:44 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:19.447 09:43:44 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:19.447 09:43:44 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:19.447 09:43:44 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:19.447 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:19.447 09:43:44 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:19.447 09:43:44 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:19.447 09:43:44 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:19.447 09:43:44 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:19.447 09:43:44 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:19.447 09:43:44 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:19.447 09:43:44 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:19.447 09:43:44 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:19.447 INFO: launching applications... 00:06:19.447 09:43:44 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:19.447 09:43:44 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:19.447 09:43:44 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:06:19.447 09:43:44 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:19.447 09:43:44 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:19.447 09:43:44 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
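The app_pid / app_socket / app_params / configs_path assignments traced above come from json_config/common.sh, which keys all per-app bookkeeping on a logical app name ("target" here) using bash associative arrays. A minimal sketch of that bookkeeping, reusing the values visible in this run but not the actual helper functions:

    declare -A app_pid=( ['target']='' )                           # filled in after spdk_tgt starts
    declare -A app_socket=( ['target']='/var/tmp/spdk_tgt.sock' )  # RPC socket per app
    declare -A app_params=( ['target']='-m 0x1 -s 1024' )          # core mask and memory size
    declare -A configs_path=( ['target']="$SPDK/test/json_config/extra_key.json" )  # fed via --json
    echo "target listens on ${app_socket[target]}"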
00:06:19.447 09:43:44 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:19.447 09:43:44 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:19.447 09:43:44 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:19.447 09:43:44 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:19.447 09:43:44 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:19.447 09:43:44 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:19.447 09:43:44 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:19.447 09:43:44 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:19.447 09:43:44 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57602 00:06:19.447 09:43:44 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:19.447 09:43:44 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:19.447 Waiting for target to run... 00:06:19.447 09:43:44 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57602 /var/tmp/spdk_tgt.sock 00:06:19.447 09:43:44 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57602 ']' 00:06:19.448 09:43:44 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:19.448 09:43:44 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:19.448 09:43:44 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:19.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:19.448 09:43:44 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:19.448 09:43:44 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:19.707 [2024-12-06 09:43:44.726518] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:06:19.707 [2024-12-06 09:43:44.726848] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57602 ] 00:06:19.965 [2024-12-06 09:43:45.162588] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.965 [2024-12-06 09:43:45.196352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.965 [2024-12-06 09:43:45.226593] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:20.532 00:06:20.532 INFO: shutting down applications... 00:06:20.532 09:43:45 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:20.532 09:43:45 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:06:20.532 09:43:45 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:20.532 09:43:45 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
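waitforlisten, traced above, blocks until the freshly started spdk_tgt accepts RPCs on its UNIX socket. One way to approximate that wait from a plain shell, assuming rpc.py and the socket path used in this run (this is a sketch, not the waitforlisten implementation itself):

    SPDK=/home/vagrant/spdk_repo/spdk
    for i in $(seq 1 100); do
        if $SPDK/scripts/rpc.py -s /var/tmp/spdk_tgt.sock -t 1 rpc_get_methods >/dev/null 2>&1; then
            break        # target is up and answering RPCs
        fi
        sleep 0.1
    done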
00:06:20.532 09:43:45 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:20.532 09:43:45 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:20.532 09:43:45 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:20.532 09:43:45 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57602 ]] 00:06:20.532 09:43:45 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57602 00:06:20.532 09:43:45 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:20.532 09:43:45 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:20.532 09:43:45 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57602 00:06:20.532 09:43:45 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:21.098 09:43:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:21.098 09:43:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:21.098 09:43:46 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57602 00:06:21.098 09:43:46 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:21.098 09:43:46 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:21.098 09:43:46 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:21.098 09:43:46 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:21.098 SPDK target shutdown done 00:06:21.098 09:43:46 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:21.098 Success 00:06:21.098 00:06:21.098 real 0m1.794s 00:06:21.098 user 0m1.703s 00:06:21.098 sys 0m0.457s 00:06:21.098 09:43:46 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:21.098 09:43:46 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:21.098 ************************************ 00:06:21.098 END TEST json_config_extra_key 00:06:21.098 ************************************ 00:06:21.098 09:43:46 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:21.098 09:43:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:21.098 09:43:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:21.098 09:43:46 -- common/autotest_common.sh@10 -- # set +x 00:06:21.098 ************************************ 00:06:21.098 START TEST alias_rpc 00:06:21.098 ************************************ 00:06:21.098 09:43:46 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:21.357 * Looking for test storage... 
00:06:21.357 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:21.357 09:43:46 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:21.357 09:43:46 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:06:21.357 09:43:46 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:21.357 09:43:46 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:21.357 09:43:46 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:21.357 09:43:46 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:21.357 09:43:46 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:21.357 09:43:46 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:21.357 09:43:46 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:21.357 09:43:46 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:21.357 09:43:46 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:21.357 09:43:46 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:21.357 09:43:46 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:21.357 09:43:46 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:21.358 09:43:46 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:21.358 09:43:46 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:21.358 09:43:46 alias_rpc -- scripts/common.sh@345 -- # : 1 00:06:21.358 09:43:46 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:21.358 09:43:46 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:21.358 09:43:46 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:21.358 09:43:46 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:21.358 09:43:46 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:21.358 09:43:46 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:21.358 09:43:46 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:21.358 09:43:46 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:21.358 09:43:46 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:21.358 09:43:46 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:21.358 09:43:46 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:21.358 09:43:46 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:21.358 09:43:46 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:21.358 09:43:46 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:21.358 09:43:46 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:21.358 09:43:46 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:21.358 09:43:46 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:21.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.358 --rc genhtml_branch_coverage=1 00:06:21.358 --rc genhtml_function_coverage=1 00:06:21.358 --rc genhtml_legend=1 00:06:21.358 --rc geninfo_all_blocks=1 00:06:21.358 --rc geninfo_unexecuted_blocks=1 00:06:21.358 00:06:21.358 ' 00:06:21.358 09:43:46 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:21.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.358 --rc genhtml_branch_coverage=1 00:06:21.358 --rc genhtml_function_coverage=1 00:06:21.358 --rc genhtml_legend=1 00:06:21.358 --rc geninfo_all_blocks=1 00:06:21.358 --rc geninfo_unexecuted_blocks=1 00:06:21.358 00:06:21.358 ' 00:06:21.358 09:43:46 alias_rpc -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:21.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.358 --rc genhtml_branch_coverage=1 00:06:21.358 --rc genhtml_function_coverage=1 00:06:21.358 --rc genhtml_legend=1 00:06:21.358 --rc geninfo_all_blocks=1 00:06:21.358 --rc geninfo_unexecuted_blocks=1 00:06:21.358 00:06:21.358 ' 00:06:21.358 09:43:46 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:21.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.358 --rc genhtml_branch_coverage=1 00:06:21.358 --rc genhtml_function_coverage=1 00:06:21.358 --rc genhtml_legend=1 00:06:21.358 --rc geninfo_all_blocks=1 00:06:21.358 --rc geninfo_unexecuted_blocks=1 00:06:21.358 00:06:21.358 ' 00:06:21.358 09:43:46 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:21.358 09:43:46 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57675 00:06:21.358 09:43:46 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:21.358 09:43:46 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57675 00:06:21.358 09:43:46 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57675 ']' 00:06:21.358 09:43:46 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.358 09:43:46 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:21.358 09:43:46 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:21.358 09:43:46 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:21.358 09:43:46 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:21.358 [2024-12-06 09:43:46.583674] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:06:21.358 [2024-12-06 09:43:46.583766] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57675 ] 00:06:21.618 [2024-12-06 09:43:46.730016] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.618 [2024-12-06 09:43:46.770697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.618 [2024-12-06 09:43:46.837694] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:21.941 09:43:47 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:21.941 09:43:47 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:21.941 09:43:47 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:22.257 09:43:47 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57675 00:06:22.257 09:43:47 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57675 ']' 00:06:22.257 09:43:47 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57675 00:06:22.257 09:43:47 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:06:22.257 09:43:47 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:22.257 09:43:47 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57675 00:06:22.257 killing process with pid 57675 00:06:22.257 09:43:47 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:22.257 09:43:47 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:22.258 09:43:47 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57675' 00:06:22.258 09:43:47 alias_rpc -- common/autotest_common.sh@973 -- # kill 57675 00:06:22.258 09:43:47 alias_rpc -- common/autotest_common.sh@978 -- # wait 57675 00:06:22.516 ************************************ 00:06:22.516 END TEST alias_rpc 00:06:22.516 ************************************ 00:06:22.516 00:06:22.516 real 0m1.444s 00:06:22.516 user 0m1.523s 00:06:22.516 sys 0m0.445s 00:06:22.516 09:43:47 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:22.516 09:43:47 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.777 09:43:47 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:22.777 09:43:47 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:22.777 09:43:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:22.777 09:43:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:22.777 09:43:47 -- common/autotest_common.sh@10 -- # set +x 00:06:22.777 ************************************ 00:06:22.777 START TEST spdkcli_tcp 00:06:22.777 ************************************ 00:06:22.777 09:43:47 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:22.777 * Looking for test storage... 
00:06:22.777 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:22.777 09:43:47 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:22.777 09:43:47 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:06:22.777 09:43:47 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:22.777 09:43:47 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:22.777 09:43:47 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:22.777 09:43:47 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:22.777 09:43:47 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:22.777 09:43:47 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:22.777 09:43:47 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:22.777 09:43:47 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:22.777 09:43:47 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:22.777 09:43:47 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:22.777 09:43:47 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:22.777 09:43:47 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:22.777 09:43:47 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:22.777 09:43:47 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:22.777 09:43:47 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:22.777 09:43:47 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:22.777 09:43:47 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:22.777 09:43:47 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:22.777 09:43:47 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:22.777 09:43:47 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:22.777 09:43:47 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:22.777 09:43:47 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:22.777 09:43:47 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:22.777 09:43:47 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:22.777 09:43:47 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:22.777 09:43:47 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:22.777 09:43:47 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:22.777 09:43:47 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:22.777 09:43:47 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:22.777 09:43:47 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:22.777 09:43:47 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:22.777 09:43:47 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:22.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.777 --rc genhtml_branch_coverage=1 00:06:22.777 --rc genhtml_function_coverage=1 00:06:22.777 --rc genhtml_legend=1 00:06:22.777 --rc geninfo_all_blocks=1 00:06:22.777 --rc geninfo_unexecuted_blocks=1 00:06:22.777 00:06:22.777 ' 00:06:22.777 09:43:47 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:22.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.777 --rc genhtml_branch_coverage=1 00:06:22.777 --rc genhtml_function_coverage=1 00:06:22.777 --rc genhtml_legend=1 00:06:22.777 --rc geninfo_all_blocks=1 00:06:22.777 --rc geninfo_unexecuted_blocks=1 00:06:22.777 
00:06:22.777 ' 00:06:22.777 09:43:47 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:22.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.777 --rc genhtml_branch_coverage=1 00:06:22.777 --rc genhtml_function_coverage=1 00:06:22.777 --rc genhtml_legend=1 00:06:22.777 --rc geninfo_all_blocks=1 00:06:22.777 --rc geninfo_unexecuted_blocks=1 00:06:22.777 00:06:22.777 ' 00:06:22.777 09:43:47 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:22.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.777 --rc genhtml_branch_coverage=1 00:06:22.777 --rc genhtml_function_coverage=1 00:06:22.777 --rc genhtml_legend=1 00:06:22.777 --rc geninfo_all_blocks=1 00:06:22.777 --rc geninfo_unexecuted_blocks=1 00:06:22.777 00:06:22.777 ' 00:06:22.777 09:43:47 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:22.777 09:43:47 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:22.777 09:43:47 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:22.777 09:43:47 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:22.777 09:43:47 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:22.777 09:43:47 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:22.777 09:43:47 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:22.777 09:43:47 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:22.777 09:43:47 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:22.777 09:43:47 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57751 00:06:22.777 09:43:47 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57751 00:06:22.777 09:43:47 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:22.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:22.777 09:43:47 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 57751 ']' 00:06:22.777 09:43:47 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.777 09:43:47 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:22.777 09:43:47 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.777 09:43:47 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:22.777 09:43:47 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:23.037 [2024-12-06 09:43:48.057349] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:06:23.037 [2024-12-06 09:43:48.057680] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57751 ] 00:06:23.037 [2024-12-06 09:43:48.203383] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:23.037 [2024-12-06 09:43:48.251060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:23.037 [2024-12-06 09:43:48.251073] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.296 [2024-12-06 09:43:48.319291] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:23.296 09:43:48 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:23.296 09:43:48 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:06:23.296 09:43:48 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57761 00:06:23.296 09:43:48 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:23.296 09:43:48 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:23.557 [ 00:06:23.557 "bdev_malloc_delete", 00:06:23.557 "bdev_malloc_create", 00:06:23.557 "bdev_null_resize", 00:06:23.557 "bdev_null_delete", 00:06:23.557 "bdev_null_create", 00:06:23.557 "bdev_nvme_cuse_unregister", 00:06:23.557 "bdev_nvme_cuse_register", 00:06:23.557 "bdev_opal_new_user", 00:06:23.557 "bdev_opal_set_lock_state", 00:06:23.557 "bdev_opal_delete", 00:06:23.557 "bdev_opal_get_info", 00:06:23.557 "bdev_opal_create", 00:06:23.557 "bdev_nvme_opal_revert", 00:06:23.557 "bdev_nvme_opal_init", 00:06:23.557 "bdev_nvme_send_cmd", 00:06:23.557 "bdev_nvme_set_keys", 00:06:23.557 "bdev_nvme_get_path_iostat", 00:06:23.557 "bdev_nvme_get_mdns_discovery_info", 00:06:23.557 "bdev_nvme_stop_mdns_discovery", 00:06:23.557 "bdev_nvme_start_mdns_discovery", 00:06:23.557 "bdev_nvme_set_multipath_policy", 00:06:23.557 "bdev_nvme_set_preferred_path", 00:06:23.557 "bdev_nvme_get_io_paths", 00:06:23.557 "bdev_nvme_remove_error_injection", 00:06:23.557 "bdev_nvme_add_error_injection", 00:06:23.557 "bdev_nvme_get_discovery_info", 00:06:23.557 "bdev_nvme_stop_discovery", 00:06:23.557 "bdev_nvme_start_discovery", 00:06:23.557 "bdev_nvme_get_controller_health_info", 00:06:23.557 "bdev_nvme_disable_controller", 00:06:23.557 "bdev_nvme_enable_controller", 00:06:23.557 "bdev_nvme_reset_controller", 00:06:23.557 "bdev_nvme_get_transport_statistics", 00:06:23.557 "bdev_nvme_apply_firmware", 00:06:23.557 "bdev_nvme_detach_controller", 00:06:23.557 "bdev_nvme_get_controllers", 00:06:23.557 "bdev_nvme_attach_controller", 00:06:23.557 "bdev_nvme_set_hotplug", 00:06:23.557 "bdev_nvme_set_options", 00:06:23.557 "bdev_passthru_delete", 00:06:23.557 "bdev_passthru_create", 00:06:23.557 "bdev_lvol_set_parent_bdev", 00:06:23.557 "bdev_lvol_set_parent", 00:06:23.557 "bdev_lvol_check_shallow_copy", 00:06:23.557 "bdev_lvol_start_shallow_copy", 00:06:23.557 "bdev_lvol_grow_lvstore", 00:06:23.557 "bdev_lvol_get_lvols", 00:06:23.557 "bdev_lvol_get_lvstores", 00:06:23.557 "bdev_lvol_delete", 00:06:23.557 "bdev_lvol_set_read_only", 00:06:23.557 "bdev_lvol_resize", 00:06:23.557 "bdev_lvol_decouple_parent", 00:06:23.557 "bdev_lvol_inflate", 00:06:23.557 "bdev_lvol_rename", 00:06:23.557 "bdev_lvol_clone_bdev", 00:06:23.557 "bdev_lvol_clone", 00:06:23.557 "bdev_lvol_snapshot", 
00:06:23.557 "bdev_lvol_create", 00:06:23.557 "bdev_lvol_delete_lvstore", 00:06:23.557 "bdev_lvol_rename_lvstore", 00:06:23.557 "bdev_lvol_create_lvstore", 00:06:23.557 "bdev_raid_set_options", 00:06:23.557 "bdev_raid_remove_base_bdev", 00:06:23.557 "bdev_raid_add_base_bdev", 00:06:23.557 "bdev_raid_delete", 00:06:23.557 "bdev_raid_create", 00:06:23.557 "bdev_raid_get_bdevs", 00:06:23.557 "bdev_error_inject_error", 00:06:23.557 "bdev_error_delete", 00:06:23.557 "bdev_error_create", 00:06:23.557 "bdev_split_delete", 00:06:23.557 "bdev_split_create", 00:06:23.557 "bdev_delay_delete", 00:06:23.557 "bdev_delay_create", 00:06:23.557 "bdev_delay_update_latency", 00:06:23.557 "bdev_zone_block_delete", 00:06:23.557 "bdev_zone_block_create", 00:06:23.557 "blobfs_create", 00:06:23.557 "blobfs_detect", 00:06:23.557 "blobfs_set_cache_size", 00:06:23.557 "bdev_aio_delete", 00:06:23.557 "bdev_aio_rescan", 00:06:23.557 "bdev_aio_create", 00:06:23.557 "bdev_ftl_set_property", 00:06:23.557 "bdev_ftl_get_properties", 00:06:23.557 "bdev_ftl_get_stats", 00:06:23.557 "bdev_ftl_unmap", 00:06:23.557 "bdev_ftl_unload", 00:06:23.557 "bdev_ftl_delete", 00:06:23.557 "bdev_ftl_load", 00:06:23.557 "bdev_ftl_create", 00:06:23.557 "bdev_virtio_attach_controller", 00:06:23.557 "bdev_virtio_scsi_get_devices", 00:06:23.557 "bdev_virtio_detach_controller", 00:06:23.557 "bdev_virtio_blk_set_hotplug", 00:06:23.557 "bdev_iscsi_delete", 00:06:23.557 "bdev_iscsi_create", 00:06:23.557 "bdev_iscsi_set_options", 00:06:23.557 "bdev_uring_delete", 00:06:23.557 "bdev_uring_rescan", 00:06:23.557 "bdev_uring_create", 00:06:23.557 "accel_error_inject_error", 00:06:23.557 "ioat_scan_accel_module", 00:06:23.557 "dsa_scan_accel_module", 00:06:23.557 "iaa_scan_accel_module", 00:06:23.557 "keyring_file_remove_key", 00:06:23.557 "keyring_file_add_key", 00:06:23.557 "keyring_linux_set_options", 00:06:23.557 "fsdev_aio_delete", 00:06:23.557 "fsdev_aio_create", 00:06:23.557 "iscsi_get_histogram", 00:06:23.557 "iscsi_enable_histogram", 00:06:23.557 "iscsi_set_options", 00:06:23.557 "iscsi_get_auth_groups", 00:06:23.557 "iscsi_auth_group_remove_secret", 00:06:23.557 "iscsi_auth_group_add_secret", 00:06:23.557 "iscsi_delete_auth_group", 00:06:23.557 "iscsi_create_auth_group", 00:06:23.557 "iscsi_set_discovery_auth", 00:06:23.557 "iscsi_get_options", 00:06:23.557 "iscsi_target_node_request_logout", 00:06:23.557 "iscsi_target_node_set_redirect", 00:06:23.557 "iscsi_target_node_set_auth", 00:06:23.557 "iscsi_target_node_add_lun", 00:06:23.557 "iscsi_get_stats", 00:06:23.557 "iscsi_get_connections", 00:06:23.557 "iscsi_portal_group_set_auth", 00:06:23.557 "iscsi_start_portal_group", 00:06:23.557 "iscsi_delete_portal_group", 00:06:23.557 "iscsi_create_portal_group", 00:06:23.557 "iscsi_get_portal_groups", 00:06:23.557 "iscsi_delete_target_node", 00:06:23.557 "iscsi_target_node_remove_pg_ig_maps", 00:06:23.557 "iscsi_target_node_add_pg_ig_maps", 00:06:23.557 "iscsi_create_target_node", 00:06:23.557 "iscsi_get_target_nodes", 00:06:23.557 "iscsi_delete_initiator_group", 00:06:23.557 "iscsi_initiator_group_remove_initiators", 00:06:23.557 "iscsi_initiator_group_add_initiators", 00:06:23.557 "iscsi_create_initiator_group", 00:06:23.557 "iscsi_get_initiator_groups", 00:06:23.557 "nvmf_set_crdt", 00:06:23.557 "nvmf_set_config", 00:06:23.557 "nvmf_set_max_subsystems", 00:06:23.557 "nvmf_stop_mdns_prr", 00:06:23.557 "nvmf_publish_mdns_prr", 00:06:23.557 "nvmf_subsystem_get_listeners", 00:06:23.557 "nvmf_subsystem_get_qpairs", 00:06:23.557 
"nvmf_subsystem_get_controllers", 00:06:23.557 "nvmf_get_stats", 00:06:23.557 "nvmf_get_transports", 00:06:23.557 "nvmf_create_transport", 00:06:23.557 "nvmf_get_targets", 00:06:23.557 "nvmf_delete_target", 00:06:23.557 "nvmf_create_target", 00:06:23.557 "nvmf_subsystem_allow_any_host", 00:06:23.557 "nvmf_subsystem_set_keys", 00:06:23.557 "nvmf_subsystem_remove_host", 00:06:23.557 "nvmf_subsystem_add_host", 00:06:23.557 "nvmf_ns_remove_host", 00:06:23.557 "nvmf_ns_add_host", 00:06:23.557 "nvmf_subsystem_remove_ns", 00:06:23.557 "nvmf_subsystem_set_ns_ana_group", 00:06:23.557 "nvmf_subsystem_add_ns", 00:06:23.557 "nvmf_subsystem_listener_set_ana_state", 00:06:23.557 "nvmf_discovery_get_referrals", 00:06:23.557 "nvmf_discovery_remove_referral", 00:06:23.557 "nvmf_discovery_add_referral", 00:06:23.557 "nvmf_subsystem_remove_listener", 00:06:23.557 "nvmf_subsystem_add_listener", 00:06:23.557 "nvmf_delete_subsystem", 00:06:23.557 "nvmf_create_subsystem", 00:06:23.557 "nvmf_get_subsystems", 00:06:23.557 "env_dpdk_get_mem_stats", 00:06:23.557 "nbd_get_disks", 00:06:23.557 "nbd_stop_disk", 00:06:23.557 "nbd_start_disk", 00:06:23.557 "ublk_recover_disk", 00:06:23.557 "ublk_get_disks", 00:06:23.557 "ublk_stop_disk", 00:06:23.557 "ublk_start_disk", 00:06:23.557 "ublk_destroy_target", 00:06:23.557 "ublk_create_target", 00:06:23.557 "virtio_blk_create_transport", 00:06:23.557 "virtio_blk_get_transports", 00:06:23.557 "vhost_controller_set_coalescing", 00:06:23.557 "vhost_get_controllers", 00:06:23.557 "vhost_delete_controller", 00:06:23.557 "vhost_create_blk_controller", 00:06:23.557 "vhost_scsi_controller_remove_target", 00:06:23.557 "vhost_scsi_controller_add_target", 00:06:23.557 "vhost_start_scsi_controller", 00:06:23.557 "vhost_create_scsi_controller", 00:06:23.557 "thread_set_cpumask", 00:06:23.557 "scheduler_set_options", 00:06:23.557 "framework_get_governor", 00:06:23.557 "framework_get_scheduler", 00:06:23.557 "framework_set_scheduler", 00:06:23.557 "framework_get_reactors", 00:06:23.557 "thread_get_io_channels", 00:06:23.557 "thread_get_pollers", 00:06:23.557 "thread_get_stats", 00:06:23.557 "framework_monitor_context_switch", 00:06:23.557 "spdk_kill_instance", 00:06:23.557 "log_enable_timestamps", 00:06:23.557 "log_get_flags", 00:06:23.557 "log_clear_flag", 00:06:23.557 "log_set_flag", 00:06:23.557 "log_get_level", 00:06:23.557 "log_set_level", 00:06:23.557 "log_get_print_level", 00:06:23.557 "log_set_print_level", 00:06:23.557 "framework_enable_cpumask_locks", 00:06:23.557 "framework_disable_cpumask_locks", 00:06:23.557 "framework_wait_init", 00:06:23.557 "framework_start_init", 00:06:23.557 "scsi_get_devices", 00:06:23.557 "bdev_get_histogram", 00:06:23.557 "bdev_enable_histogram", 00:06:23.557 "bdev_set_qos_limit", 00:06:23.557 "bdev_set_qd_sampling_period", 00:06:23.557 "bdev_get_bdevs", 00:06:23.557 "bdev_reset_iostat", 00:06:23.557 "bdev_get_iostat", 00:06:23.557 "bdev_examine", 00:06:23.557 "bdev_wait_for_examine", 00:06:23.557 "bdev_set_options", 00:06:23.557 "accel_get_stats", 00:06:23.557 "accel_set_options", 00:06:23.557 "accel_set_driver", 00:06:23.558 "accel_crypto_key_destroy", 00:06:23.558 "accel_crypto_keys_get", 00:06:23.558 "accel_crypto_key_create", 00:06:23.558 "accel_assign_opc", 00:06:23.558 "accel_get_module_info", 00:06:23.558 "accel_get_opc_assignments", 00:06:23.558 "vmd_rescan", 00:06:23.558 "vmd_remove_device", 00:06:23.558 "vmd_enable", 00:06:23.558 "sock_get_default_impl", 00:06:23.558 "sock_set_default_impl", 00:06:23.558 "sock_impl_set_options", 00:06:23.558 
"sock_impl_get_options", 00:06:23.558 "iobuf_get_stats", 00:06:23.558 "iobuf_set_options", 00:06:23.558 "keyring_get_keys", 00:06:23.558 "framework_get_pci_devices", 00:06:23.558 "framework_get_config", 00:06:23.558 "framework_get_subsystems", 00:06:23.558 "fsdev_set_opts", 00:06:23.558 "fsdev_get_opts", 00:06:23.558 "trace_get_info", 00:06:23.558 "trace_get_tpoint_group_mask", 00:06:23.558 "trace_disable_tpoint_group", 00:06:23.558 "trace_enable_tpoint_group", 00:06:23.558 "trace_clear_tpoint_mask", 00:06:23.558 "trace_set_tpoint_mask", 00:06:23.558 "notify_get_notifications", 00:06:23.558 "notify_get_types", 00:06:23.558 "spdk_get_version", 00:06:23.558 "rpc_get_methods" 00:06:23.558 ] 00:06:23.558 09:43:48 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:23.558 09:43:48 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:23.558 09:43:48 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:23.818 09:43:48 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:23.818 09:43:48 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57751 00:06:23.818 09:43:48 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 57751 ']' 00:06:23.818 09:43:48 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 57751 00:06:23.818 09:43:48 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:06:23.818 09:43:48 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:23.818 09:43:48 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57751 00:06:23.818 killing process with pid 57751 00:06:23.818 09:43:48 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:23.818 09:43:48 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:23.818 09:43:48 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57751' 00:06:23.818 09:43:48 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 57751 00:06:23.818 09:43:48 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 57751 00:06:24.078 ************************************ 00:06:24.078 END TEST spdkcli_tcp 00:06:24.078 ************************************ 00:06:24.078 00:06:24.078 real 0m1.455s 00:06:24.078 user 0m2.466s 00:06:24.078 sys 0m0.492s 00:06:24.078 09:43:49 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:24.078 09:43:49 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:24.078 09:43:49 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:24.078 09:43:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:24.078 09:43:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:24.078 09:43:49 -- common/autotest_common.sh@10 -- # set +x 00:06:24.078 ************************************ 00:06:24.078 START TEST dpdk_mem_utility 00:06:24.078 ************************************ 00:06:24.078 09:43:49 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:24.336 * Looking for test storage... 
00:06:24.336 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:24.336 09:43:49 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:24.336 09:43:49 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:06:24.336 09:43:49 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:24.336 09:43:49 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:24.336 09:43:49 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:24.336 09:43:49 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:24.336 09:43:49 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:24.336 09:43:49 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:24.336 09:43:49 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:24.336 09:43:49 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:24.336 09:43:49 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:24.336 09:43:49 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:24.336 09:43:49 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:24.336 09:43:49 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:24.336 09:43:49 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:24.336 09:43:49 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:24.336 09:43:49 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:24.336 09:43:49 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:24.336 09:43:49 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:24.336 09:43:49 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:24.336 09:43:49 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:24.336 09:43:49 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:24.336 09:43:49 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:24.336 09:43:49 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:24.336 09:43:49 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:24.336 09:43:49 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:24.336 09:43:49 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:24.336 09:43:49 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:24.336 09:43:49 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:24.336 09:43:49 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:24.336 09:43:49 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:24.336 09:43:49 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:24.336 09:43:49 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:24.336 09:43:49 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:24.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.336 --rc genhtml_branch_coverage=1 00:06:24.337 --rc genhtml_function_coverage=1 00:06:24.337 --rc genhtml_legend=1 00:06:24.337 --rc geninfo_all_blocks=1 00:06:24.337 --rc geninfo_unexecuted_blocks=1 00:06:24.337 00:06:24.337 ' 00:06:24.337 09:43:49 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:24.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.337 --rc 
genhtml_branch_coverage=1 00:06:24.337 --rc genhtml_function_coverage=1 00:06:24.337 --rc genhtml_legend=1 00:06:24.337 --rc geninfo_all_blocks=1 00:06:24.337 --rc geninfo_unexecuted_blocks=1 00:06:24.337 00:06:24.337 ' 00:06:24.337 09:43:49 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:24.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.337 --rc genhtml_branch_coverage=1 00:06:24.337 --rc genhtml_function_coverage=1 00:06:24.337 --rc genhtml_legend=1 00:06:24.337 --rc geninfo_all_blocks=1 00:06:24.337 --rc geninfo_unexecuted_blocks=1 00:06:24.337 00:06:24.337 ' 00:06:24.337 09:43:49 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:24.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.337 --rc genhtml_branch_coverage=1 00:06:24.337 --rc genhtml_function_coverage=1 00:06:24.337 --rc genhtml_legend=1 00:06:24.337 --rc geninfo_all_blocks=1 00:06:24.337 --rc geninfo_unexecuted_blocks=1 00:06:24.337 00:06:24.337 ' 00:06:24.337 09:43:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:24.337 09:43:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=57843 00:06:24.337 09:43:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:24.337 09:43:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 57843 00:06:24.337 09:43:49 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 57843 ']' 00:06:24.337 09:43:49 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.337 09:43:49 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:24.337 09:43:49 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:24.337 09:43:49 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:24.337 09:43:49 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:24.595 [2024-12-06 09:43:49.629534] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:06:24.595 [2024-12-06 09:43:49.629881] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57843 ] 00:06:24.595 [2024-12-06 09:43:49.776036] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.595 [2024-12-06 09:43:49.819177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.853 [2024-12-06 09:43:49.889838] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:24.853 09:43:50 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:24.853 09:43:50 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:06:24.853 09:43:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:24.853 09:43:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:24.853 09:43:50 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.853 09:43:50 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:24.853 { 00:06:24.853 "filename": "/tmp/spdk_mem_dump.txt" 00:06:24.853 } 00:06:24.853 09:43:50 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:24.853 09:43:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:25.113 DPDK memory size 818.000000 MiB in 1 heap(s) 00:06:25.113 1 heaps totaling size 818.000000 MiB 00:06:25.113 size: 818.000000 MiB heap id: 0 00:06:25.113 end heaps---------- 00:06:25.113 9 mempools totaling size 603.782043 MiB 00:06:25.113 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:25.113 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:25.113 size: 100.555481 MiB name: bdev_io_57843 00:06:25.113 size: 50.003479 MiB name: msgpool_57843 00:06:25.113 size: 36.509338 MiB name: fsdev_io_57843 00:06:25.113 size: 21.763794 MiB name: PDU_Pool 00:06:25.113 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:25.113 size: 4.133484 MiB name: evtpool_57843 00:06:25.113 size: 0.026123 MiB name: Session_Pool 00:06:25.113 end mempools------- 00:06:25.113 6 memzones totaling size 4.142822 MiB 00:06:25.113 size: 1.000366 MiB name: RG_ring_0_57843 00:06:25.113 size: 1.000366 MiB name: RG_ring_1_57843 00:06:25.113 size: 1.000366 MiB name: RG_ring_4_57843 00:06:25.113 size: 1.000366 MiB name: RG_ring_5_57843 00:06:25.113 size: 0.125366 MiB name: RG_ring_2_57843 00:06:25.113 size: 0.015991 MiB name: RG_ring_3_57843 00:06:25.113 end memzones------- 00:06:25.113 09:43:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:25.113 heap id: 0 total size: 818.000000 MiB number of busy elements: 313 number of free elements: 15 00:06:25.113 list of free elements. 
size: 10.803223 MiB 00:06:25.113 element at address: 0x200019200000 with size: 0.999878 MiB 00:06:25.113 element at address: 0x200019400000 with size: 0.999878 MiB 00:06:25.113 element at address: 0x200032000000 with size: 0.994446 MiB 00:06:25.113 element at address: 0x200000400000 with size: 0.993958 MiB 00:06:25.113 element at address: 0x200006400000 with size: 0.959839 MiB 00:06:25.113 element at address: 0x200012c00000 with size: 0.944275 MiB 00:06:25.113 element at address: 0x200019600000 with size: 0.936584 MiB 00:06:25.113 element at address: 0x200000200000 with size: 0.717346 MiB 00:06:25.113 element at address: 0x20001ae00000 with size: 0.568237 MiB 00:06:25.113 element at address: 0x20000a600000 with size: 0.488892 MiB 00:06:25.113 element at address: 0x200000c00000 with size: 0.486267 MiB 00:06:25.113 element at address: 0x200019800000 with size: 0.485657 MiB 00:06:25.113 element at address: 0x200003e00000 with size: 0.480286 MiB 00:06:25.113 element at address: 0x200028200000 with size: 0.395935 MiB 00:06:25.113 element at address: 0x200000800000 with size: 0.351746 MiB 00:06:25.114 list of standard malloc elements. size: 199.267883 MiB 00:06:25.114 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:06:25.114 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:06:25.114 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:25.114 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:06:25.114 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:06:25.114 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:25.114 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:06:25.114 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:25.114 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:06:25.114 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:25.114 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:25.114 element at address: 0x2000004fe740 with size: 0.000183 MiB 00:06:25.114 element at address: 0x2000004fe800 with size: 0.000183 MiB 00:06:25.114 element at address: 0x2000004fe8c0 with size: 0.000183 MiB 00:06:25.114 element at address: 0x2000004fe980 with size: 0.000183 MiB 00:06:25.114 element at address: 0x2000004fea40 with size: 0.000183 MiB 00:06:25.114 element at address: 0x2000004feb00 with size: 0.000183 MiB 00:06:25.114 element at address: 0x2000004febc0 with size: 0.000183 MiB 00:06:25.114 element at address: 0x2000004fec80 with size: 0.000183 MiB 00:06:25.114 element at address: 0x2000004fed40 with size: 0.000183 MiB 00:06:25.114 element at address: 0x2000004fee00 with size: 0.000183 MiB 00:06:25.114 element at address: 0x2000004feec0 with size: 0.000183 MiB 00:06:25.114 element at address: 0x2000004fef80 with size: 0.000183 MiB 00:06:25.114 element at address: 0x2000004ff040 with size: 0.000183 MiB 00:06:25.114 element at address: 0x2000004ff100 with size: 0.000183 MiB 00:06:25.114 element at address: 0x2000004ff1c0 with size: 0.000183 MiB 00:06:25.114 element at address: 0x2000004ff280 with size: 0.000183 MiB 00:06:25.114 element at address: 0x2000004ff340 with size: 0.000183 MiB 00:06:25.114 element at address: 0x2000004ff400 with size: 0.000183 MiB 00:06:25.114 element at address: 0x2000004ff4c0 with size: 0.000183 MiB 00:06:25.114 element at address: 0x2000004ff580 with size: 0.000183 MiB 00:06:25.114 element at address: 0x2000004ff640 with size: 0.000183 MiB 00:06:25.114 element at address: 0x2000004ff700 with size: 0.000183 MiB 
00:06:25.114 element at address: 0x2000004ff7c0 with size: 0.000183 MiB 00:06:25.114 element at address: 0x2000004ff880 with size: 0.000183 MiB 00:06:25.114 element at address: 0x2000004ff940 with size: 0.000183 MiB 00:06:25.114 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:06:25.114 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:06:25.114 element at address: 0x2000004ffcc0 with size: 0.000183 MiB 00:06:25.114 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:06:25.114 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:06:25.114 element at address: 0x20000085a0c0 with size: 0.000183 MiB 00:06:25.114 element at address: 0x20000085a2c0 with size: 0.000183 MiB 00:06:25.114 element at address: 0x20000085e580 with size: 0.000183 MiB 00:06:25.114 element at address: 0x20000087e840 with size: 0.000183 MiB 00:06:25.114 element at address: 0x20000087e900 with size: 0.000183 MiB 00:06:25.114 element at address: 0x20000087e9c0 with size: 0.000183 MiB 00:06:25.114 element at address: 0x20000087ea80 with size: 0.000183 MiB 00:06:25.114 element at address: 0x20000087eb40 with size: 0.000183 MiB 00:06:25.114 element at address: 0x20000087ec00 with size: 0.000183 MiB 00:06:25.114 element at address: 0x20000087ecc0 with size: 0.000183 MiB 00:06:25.114 element at address: 0x20000087ed80 with size: 0.000183 MiB 00:06:25.114 element at address: 0x20000087ee40 with size: 0.000183 MiB 00:06:25.114 element at address: 0x20000087ef00 with size: 0.000183 MiB 00:06:25.114 element at address: 0x20000087efc0 with size: 0.000183 MiB 00:06:25.114 element at address: 0x20000087f080 with size: 0.000183 MiB 00:06:25.114 element at address: 0x20000087f140 with size: 0.000183 MiB 00:06:25.114 element at address: 0x20000087f200 with size: 0.000183 MiB 00:06:25.114 element at address: 0x20000087f2c0 with size: 0.000183 MiB 00:06:25.114 element at address: 0x20000087f380 with size: 0.000183 MiB 00:06:25.114 element at address: 0x20000087f440 with size: 0.000183 MiB 00:06:25.114 element at address: 0x20000087f500 with size: 0.000183 MiB 00:06:25.114 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:06:25.114 element at address: 0x20000087f680 with size: 0.000183 MiB 00:06:25.114 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:06:25.114 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:06:25.114 element at address: 0x200000c7c7c0 with size: 0.000183 MiB 00:06:25.114 element at address: 0x200000c7c880 with size: 0.000183 MiB 00:06:25.114 element at address: 0x200000c7c940 with size: 0.000183 MiB 00:06:25.114 element at address: 0x200000c7ca00 with size: 0.000183 MiB 00:06:25.114 element at address: 0x200000c7cac0 with size: 0.000183 MiB 00:06:25.114 element at address: 0x200000c7cb80 with size: 0.000183 MiB 00:06:25.114 element at address: 0x200000c7cc40 with size: 0.000183 MiB 00:06:25.114 element at address: 0x200000c7cd00 with size: 0.000183 MiB 00:06:25.114 element at address: 0x200000c7cdc0 with size: 0.000183 MiB 00:06:25.114 element at address: 0x200000c7ce80 with size: 0.000183 MiB 00:06:25.114 element at address: 0x200000c7cf40 with size: 0.000183 MiB 00:06:25.114 element at address: 0x200000c7d000 with size: 0.000183 MiB 00:06:25.114 element at address: 0x200000c7d0c0 with size: 0.000183 MiB 00:06:25.114 element at address: 0x200000c7d180 with size: 0.000183 MiB 00:06:25.114 element at address: 0x200000c7d240 with size: 0.000183 MiB 00:06:25.114 element at address: 0x200000c7d300 with size: 0.000183 MiB 00:06:25.114 element at 
address: 0x200000c7d3c0 with size: 0.000183 MiB 00:06:25.114 element at address: 0x200000c7d480 with size: 0.000183 MiB 00:06:25.114 element at address: 0x200000c7d540 with size: 0.000183 MiB 00:06:25.114 element at address: 0x200000c7d600 with size: 0.000183 MiB 00:06:25.114 element at address: 0x200000c7d6c0 with size: 0.000183 MiB 00:06:25.114 element at address: 0x200000c7d780 with size: 0.000183 MiB 00:06:25.114 element at address: 0x200000c7d840 with size: 0.000183 MiB 00:06:25.114 element at address: 0x200000c7d900 with size: 0.000183 MiB 00:06:25.114 element at address: 0x200000c7d9c0 with size: 0.000183 MiB 00:06:25.114 element at address: 0x200000c7da80 with size: 0.000183 MiB 00:06:25.114 element at address: 0x200000c7db40 with size: 0.000183 MiB 00:06:25.114 element at address: 0x200000c7dc00 with size: 0.000183 MiB 00:06:25.114 element at address: 0x200000c7dcc0 with size: 0.000183 MiB 00:06:25.114 element at address: 0x200000c7dd80 with size: 0.000183 MiB 00:06:25.114 element at address: 0x200000c7de40 with size: 0.000183 MiB 00:06:25.114 element at address: 0x200000c7df00 with size: 0.000183 MiB 00:06:25.114 element at address: 0x200000c7dfc0 with size: 0.000183 MiB 00:06:25.114 element at address: 0x200000c7e080 with size: 0.000183 MiB 00:06:25.114 element at address: 0x200000c7e140 with size: 0.000183 MiB 00:06:25.114 element at address: 0x200000c7e200 with size: 0.000183 MiB 00:06:25.114 element at address: 0x200000c7e2c0 with size: 0.000183 MiB 00:06:25.114 element at address: 0x200000c7e380 with size: 0.000183 MiB 00:06:25.114 element at address: 0x200000c7e440 with size: 0.000183 MiB 00:06:25.114 element at address: 0x200000c7e500 with size: 0.000183 MiB 00:06:25.114 element at address: 0x200000c7e5c0 with size: 0.000183 MiB 00:06:25.114 element at address: 0x200000c7e680 with size: 0.000183 MiB 00:06:25.114 element at address: 0x200000c7e740 with size: 0.000183 MiB 00:06:25.114 element at address: 0x200000c7e800 with size: 0.000183 MiB 00:06:25.114 element at address: 0x200000c7e8c0 with size: 0.000183 MiB 00:06:25.114 element at address: 0x200000c7e980 with size: 0.000183 MiB 00:06:25.114 element at address: 0x200000c7ea40 with size: 0.000183 MiB 00:06:25.114 element at address: 0x200000c7eb00 with size: 0.000183 MiB 00:06:25.114 element at address: 0x200000c7ebc0 with size: 0.000183 MiB 00:06:25.114 element at address: 0x200000c7ec80 with size: 0.000183 MiB 00:06:25.114 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:06:25.114 element at address: 0x200000cff000 with size: 0.000183 MiB 00:06:25.114 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:06:25.114 element at address: 0x200003e7af40 with size: 0.000183 MiB 00:06:25.114 element at address: 0x200003e7b000 with size: 0.000183 MiB 00:06:25.114 element at address: 0x200003e7b0c0 with size: 0.000183 MiB 00:06:25.114 element at address: 0x200003e7b180 with size: 0.000183 MiB 00:06:25.114 element at address: 0x200003e7b240 with size: 0.000183 MiB 00:06:25.114 element at address: 0x200003e7b300 with size: 0.000183 MiB 00:06:25.114 element at address: 0x200003e7b3c0 with size: 0.000183 MiB 00:06:25.114 element at address: 0x200003e7b480 with size: 0.000183 MiB 00:06:25.114 element at address: 0x200003e7b540 with size: 0.000183 MiB 00:06:25.114 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:06:25.114 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:06:25.114 element at address: 0x200003efb980 with size: 0.000183 MiB 00:06:25.114 element at address: 0x2000064fdd80 
with size: 0.000183 MiB 00:06:25.114 element at address: 0x20000a67d280 with size: 0.000183 MiB 00:06:25.114 element at address: 0x20000a67d340 with size: 0.000183 MiB 00:06:25.114 element at address: 0x20000a67d400 with size: 0.000183 MiB 00:06:25.114 element at address: 0x20000a67d4c0 with size: 0.000183 MiB 00:06:25.114 element at address: 0x20000a67d580 with size: 0.000183 MiB 00:06:25.114 element at address: 0x20000a67d640 with size: 0.000183 MiB 00:06:25.114 element at address: 0x20000a67d700 with size: 0.000183 MiB 00:06:25.114 element at address: 0x20000a67d7c0 with size: 0.000183 MiB 00:06:25.114 element at address: 0x20000a67d880 with size: 0.000183 MiB 00:06:25.114 element at address: 0x20000a67d940 with size: 0.000183 MiB 00:06:25.114 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:06:25.114 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:06:25.114 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:06:25.114 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:06:25.114 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:06:25.114 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:06:25.114 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:06:25.114 element at address: 0x20001ae91780 with size: 0.000183 MiB 00:06:25.114 element at address: 0x20001ae91840 with size: 0.000183 MiB 00:06:25.114 element at address: 0x20001ae91900 with size: 0.000183 MiB 00:06:25.114 element at address: 0x20001ae919c0 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20001ae91a80 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20001ae91b40 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20001ae91c00 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20001ae91cc0 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20001ae91d80 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20001ae91e40 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20001ae91f00 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20001ae91fc0 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20001ae92080 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20001ae92140 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20001ae92200 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20001ae922c0 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20001ae92380 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20001ae92440 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20001ae92500 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20001ae925c0 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20001ae92680 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20001ae92740 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20001ae92800 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20001ae928c0 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20001ae92980 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20001ae92a40 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20001ae92b00 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20001ae92bc0 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20001ae92c80 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20001ae92d40 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20001ae92e00 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20001ae92ec0 with size: 0.000183 MiB 
00:06:25.115 element at address: 0x20001ae92f80 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20001ae93040 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20001ae93100 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20001ae931c0 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20001ae93280 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20001ae93340 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20001ae93400 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20001ae934c0 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20001ae93580 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20001ae93640 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20001ae93700 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20001ae937c0 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20001ae93880 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20001ae93940 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20001ae93a00 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20001ae93ac0 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20001ae93b80 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20001ae93c40 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20001ae93d00 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20001ae93dc0 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20001ae93e80 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20001ae93f40 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20001ae94000 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20001ae940c0 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20001ae94180 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20001ae94240 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20001ae94300 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20001ae943c0 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20001ae94480 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20001ae94540 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20001ae94600 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20001ae946c0 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20001ae94780 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20001ae94840 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20001ae94900 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20001ae949c0 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20001ae94a80 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20001ae94b40 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20001ae94c00 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20001ae94cc0 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20001ae94d80 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20001ae94e40 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20001ae94f00 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20001ae94fc0 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20001ae95080 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20001ae95140 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20001ae95200 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20001ae952c0 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:06:25.115 element at 
address: 0x20001ae95440 with size: 0.000183 MiB 00:06:25.115 element at address: 0x2000282655c0 with size: 0.000183 MiB 00:06:25.115 element at address: 0x200028265680 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20002826c280 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20002826c480 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20002826c540 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20002826c600 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20002826c6c0 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20002826c780 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20002826c840 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20002826c900 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20002826c9c0 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20002826ca80 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20002826cb40 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20002826cc00 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20002826ccc0 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20002826cd80 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20002826ce40 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20002826cf00 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20002826cfc0 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20002826d080 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20002826d140 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20002826d200 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20002826d2c0 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20002826d380 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20002826d440 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20002826d500 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20002826d5c0 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20002826d680 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20002826d740 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20002826d800 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20002826d8c0 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20002826d980 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20002826da40 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20002826db00 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20002826dbc0 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20002826dc80 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20002826dd40 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20002826de00 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20002826dec0 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20002826df80 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20002826e040 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20002826e100 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20002826e1c0 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20002826e280 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20002826e340 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20002826e400 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20002826e4c0 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20002826e580 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20002826e640 
with size: 0.000183 MiB 00:06:25.115 element at address: 0x20002826e700 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20002826e7c0 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20002826e880 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20002826e940 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20002826ea00 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20002826eac0 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20002826eb80 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20002826ec40 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20002826ed00 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20002826edc0 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20002826ee80 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20002826ef40 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20002826f000 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20002826f0c0 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20002826f180 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20002826f240 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20002826f300 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20002826f3c0 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20002826f480 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20002826f540 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20002826f600 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20002826f6c0 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20002826f780 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20002826f840 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20002826f900 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20002826f9c0 with size: 0.000183 MiB 00:06:25.115 element at address: 0x20002826fa80 with size: 0.000183 MiB 00:06:25.116 element at address: 0x20002826fb40 with size: 0.000183 MiB 00:06:25.116 element at address: 0x20002826fc00 with size: 0.000183 MiB 00:06:25.116 element at address: 0x20002826fcc0 with size: 0.000183 MiB 00:06:25.116 element at address: 0x20002826fd80 with size: 0.000183 MiB 00:06:25.116 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:06:25.116 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:06:25.116 list of memzone associated elements. 
size: 607.928894 MiB 00:06:25.116 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:06:25.116 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:25.116 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:06:25.116 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:25.116 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:06:25.116 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_57843_0 00:06:25.116 element at address: 0x200000dff380 with size: 48.003052 MiB 00:06:25.116 associated memzone info: size: 48.002930 MiB name: MP_msgpool_57843_0 00:06:25.116 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:06:25.116 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_57843_0 00:06:25.116 element at address: 0x2000199be940 with size: 20.255554 MiB 00:06:25.116 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:25.116 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:06:25.116 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:25.116 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:06:25.116 associated memzone info: size: 3.000122 MiB name: MP_evtpool_57843_0 00:06:25.116 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:06:25.116 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_57843 00:06:25.116 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:25.116 associated memzone info: size: 1.007996 MiB name: MP_evtpool_57843 00:06:25.116 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:06:25.116 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:25.116 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:06:25.116 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:25.116 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:06:25.116 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:25.116 element at address: 0x200003efba40 with size: 1.008118 MiB 00:06:25.116 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:25.116 element at address: 0x200000cff180 with size: 1.000488 MiB 00:06:25.116 associated memzone info: size: 1.000366 MiB name: RG_ring_0_57843 00:06:25.116 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:06:25.116 associated memzone info: size: 1.000366 MiB name: RG_ring_1_57843 00:06:25.116 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:06:25.116 associated memzone info: size: 1.000366 MiB name: RG_ring_4_57843 00:06:25.116 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:06:25.116 associated memzone info: size: 1.000366 MiB name: RG_ring_5_57843 00:06:25.116 element at address: 0x20000087f740 with size: 0.500488 MiB 00:06:25.116 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_57843 00:06:25.116 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:06:25.116 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_57843 00:06:25.116 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:06:25.116 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:25.116 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:06:25.116 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:25.116 element at address: 0x20001987c540 with size: 0.250488 MiB 00:06:25.116 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:06:25.116 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:06:25.116 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_57843 00:06:25.116 element at address: 0x20000085e640 with size: 0.125488 MiB 00:06:25.116 associated memzone info: size: 0.125366 MiB name: RG_ring_2_57843 00:06:25.116 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:06:25.116 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:25.116 element at address: 0x200028265740 with size: 0.023743 MiB 00:06:25.116 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:25.116 element at address: 0x20000085a380 with size: 0.016113 MiB 00:06:25.116 associated memzone info: size: 0.015991 MiB name: RG_ring_3_57843 00:06:25.116 element at address: 0x20002826b880 with size: 0.002441 MiB 00:06:25.116 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:25.116 element at address: 0x2000004ffb80 with size: 0.000305 MiB 00:06:25.116 associated memzone info: size: 0.000183 MiB name: MP_msgpool_57843 00:06:25.116 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:06:25.116 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_57843 00:06:25.116 element at address: 0x20000085a180 with size: 0.000305 MiB 00:06:25.116 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_57843 00:06:25.116 element at address: 0x20002826c340 with size: 0.000305 MiB 00:06:25.116 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:25.116 09:43:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:25.116 09:43:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 57843 00:06:25.116 09:43:50 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 57843 ']' 00:06:25.116 09:43:50 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 57843 00:06:25.116 09:43:50 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:06:25.116 09:43:50 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:25.116 09:43:50 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57843 00:06:25.116 killing process with pid 57843 00:06:25.116 09:43:50 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:25.116 09:43:50 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:25.116 09:43:50 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57843' 00:06:25.116 09:43:50 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 57843 00:06:25.116 09:43:50 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 57843 00:06:25.375 00:06:25.375 real 0m1.303s 00:06:25.375 user 0m1.223s 00:06:25.375 sys 0m0.425s 00:06:25.375 09:43:50 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:25.375 09:43:50 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:25.375 ************************************ 00:06:25.375 END TEST dpdk_mem_utility 00:06:25.375 ************************************ 00:06:25.634 09:43:50 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:25.634 09:43:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:25.634 09:43:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:25.634 09:43:50 -- common/autotest_common.sh@10 -- # set +x 
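The dpdk_mem_utility run above drives spdk_tgt through the autotest harness (run_test, rpc_cmd) and then renders the /tmp/spdk_mem_dump.txt snapshot with scripts/dpdk_mem_info.py. A minimal manual sketch of the same three steps, assuming an spdk_tgt instance is already listening on the default /var/tmp/spdk.sock socket and that the repo layout matches the paths shown in this log, could look like:

  # Ask the running target to write its DPDK memory snapshot; the RPC reply
  # names the dump file (shown above as /tmp/spdk_mem_dump.txt).
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats

  # Summarize heaps, mempools and memzones from that snapshot.
  /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py

  # Per-element breakdown for heap/malloc id 0, matching the "heap id: 0"
  # listing printed by the test above.
  /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0

The -m 0 form is the same invocation the test itself issued (test_dpdk_mem_info.sh@23); using rpc.py directly in place of the harness's rpc_cmd wrapper is an assumption about how to reproduce the flow outside autotest.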
00:06:25.634 ************************************ 00:06:25.634 START TEST event 00:06:25.634 ************************************ 00:06:25.634 09:43:50 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:25.634 * Looking for test storage... 00:06:25.634 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:25.634 09:43:50 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:25.634 09:43:50 event -- common/autotest_common.sh@1711 -- # lcov --version 00:06:25.634 09:43:50 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:25.634 09:43:50 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:25.634 09:43:50 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:25.634 09:43:50 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:25.634 09:43:50 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:25.634 09:43:50 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:25.634 09:43:50 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:25.634 09:43:50 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:25.634 09:43:50 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:25.634 09:43:50 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:25.634 09:43:50 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:25.634 09:43:50 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:25.634 09:43:50 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:25.634 09:43:50 event -- scripts/common.sh@344 -- # case "$op" in 00:06:25.634 09:43:50 event -- scripts/common.sh@345 -- # : 1 00:06:25.634 09:43:50 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:25.634 09:43:50 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:25.634 09:43:50 event -- scripts/common.sh@365 -- # decimal 1 00:06:25.634 09:43:50 event -- scripts/common.sh@353 -- # local d=1 00:06:25.634 09:43:50 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:25.634 09:43:50 event -- scripts/common.sh@355 -- # echo 1 00:06:25.634 09:43:50 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:25.634 09:43:50 event -- scripts/common.sh@366 -- # decimal 2 00:06:25.634 09:43:50 event -- scripts/common.sh@353 -- # local d=2 00:06:25.634 09:43:50 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:25.634 09:43:50 event -- scripts/common.sh@355 -- # echo 2 00:06:25.634 09:43:50 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:25.634 09:43:50 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:25.634 09:43:50 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:25.634 09:43:50 event -- scripts/common.sh@368 -- # return 0 00:06:25.634 09:43:50 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:25.634 09:43:50 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:25.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.634 --rc genhtml_branch_coverage=1 00:06:25.634 --rc genhtml_function_coverage=1 00:06:25.634 --rc genhtml_legend=1 00:06:25.634 --rc geninfo_all_blocks=1 00:06:25.634 --rc geninfo_unexecuted_blocks=1 00:06:25.634 00:06:25.634 ' 00:06:25.634 09:43:50 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:25.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.634 --rc genhtml_branch_coverage=1 00:06:25.634 --rc genhtml_function_coverage=1 00:06:25.634 --rc genhtml_legend=1 00:06:25.634 --rc 
geninfo_all_blocks=1 00:06:25.634 --rc geninfo_unexecuted_blocks=1 00:06:25.634 00:06:25.634 ' 00:06:25.634 09:43:50 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:25.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.634 --rc genhtml_branch_coverage=1 00:06:25.634 --rc genhtml_function_coverage=1 00:06:25.634 --rc genhtml_legend=1 00:06:25.634 --rc geninfo_all_blocks=1 00:06:25.634 --rc geninfo_unexecuted_blocks=1 00:06:25.634 00:06:25.634 ' 00:06:25.634 09:43:50 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:25.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.634 --rc genhtml_branch_coverage=1 00:06:25.634 --rc genhtml_function_coverage=1 00:06:25.634 --rc genhtml_legend=1 00:06:25.634 --rc geninfo_all_blocks=1 00:06:25.634 --rc geninfo_unexecuted_blocks=1 00:06:25.634 00:06:25.634 ' 00:06:25.634 09:43:50 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:25.634 09:43:50 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:25.634 09:43:50 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:25.634 09:43:50 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:06:25.634 09:43:50 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:25.634 09:43:50 event -- common/autotest_common.sh@10 -- # set +x 00:06:25.634 ************************************ 00:06:25.634 START TEST event_perf 00:06:25.634 ************************************ 00:06:25.634 09:43:50 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:25.634 Running I/O for 1 seconds...[2024-12-06 09:43:50.860309] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:06:25.635 [2024-12-06 09:43:50.860512] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57920 ] 00:06:25.893 [2024-12-06 09:43:51.000793] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:25.893 [2024-12-06 09:43:51.057475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:25.893 [2024-12-06 09:43:51.057647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:25.893 [2024-12-06 09:43:51.057792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:25.893 [2024-12-06 09:43:51.057797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.829 Running I/O for 1 seconds... 00:06:26.829 lcore 0: 154947 00:06:26.829 lcore 1: 154948 00:06:26.829 lcore 2: 154944 00:06:26.829 lcore 3: 154945 00:06:27.088 done. 
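The event_perf binary above was launched with -m 0xF -t 1, i.e. four reactors (cores 0-3) processing events for one second, and it prints a per-lcore tally before "done.". A hedged example of rerunning the same binary with a different core mask and duration; the 0x3 mask and 5-second window here are illustrative values, not taken from this log:

  # Two reactors (cores 0 and 1), 5-second measurement window.
  /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0x3 -t 5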
00:06:27.088 ************************************ 00:06:27.088 END TEST event_perf 00:06:27.088 ************************************ 00:06:27.088 00:06:27.088 real 0m1.259s 00:06:27.088 user 0m4.084s 00:06:27.088 sys 0m0.053s 00:06:27.088 09:43:52 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:27.088 09:43:52 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:27.088 09:43:52 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:27.088 09:43:52 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:27.088 09:43:52 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:27.088 09:43:52 event -- common/autotest_common.sh@10 -- # set +x 00:06:27.088 ************************************ 00:06:27.088 START TEST event_reactor 00:06:27.088 ************************************ 00:06:27.088 09:43:52 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:27.088 [2024-12-06 09:43:52.177822] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:06:27.088 [2024-12-06 09:43:52.178160] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57959 ] 00:06:27.088 [2024-12-06 09:43:52.319501] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.346 [2024-12-06 09:43:52.367204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.280 test_start 00:06:28.280 oneshot 00:06:28.280 tick 100 00:06:28.280 tick 100 00:06:28.280 tick 250 00:06:28.280 tick 100 00:06:28.280 tick 100 00:06:28.280 tick 100 00:06:28.280 tick 250 00:06:28.280 tick 500 00:06:28.280 tick 100 00:06:28.280 tick 100 00:06:28.280 tick 250 00:06:28.280 tick 100 00:06:28.280 tick 100 00:06:28.280 test_end 00:06:28.280 00:06:28.280 real 0m1.251s 00:06:28.280 user 0m1.105s 00:06:28.280 sys 0m0.040s 00:06:28.280 09:43:53 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:28.280 09:43:53 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:28.280 ************************************ 00:06:28.280 END TEST event_reactor 00:06:28.280 ************************************ 00:06:28.281 09:43:53 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:28.281 09:43:53 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:28.281 09:43:53 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:28.281 09:43:53 event -- common/autotest_common.sh@10 -- # set +x 00:06:28.281 ************************************ 00:06:28.281 START TEST event_reactor_perf 00:06:28.281 ************************************ 00:06:28.281 09:43:53 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:28.281 [2024-12-06 09:43:53.479774] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:06:28.281 [2024-12-06 09:43:53.479858] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57989 ] 00:06:28.538 [2024-12-06 09:43:53.617653] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.538 [2024-12-06 09:43:53.661395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.474 test_start 00:06:29.474 test_end 00:06:29.474 Performance: 446207 events per second 00:06:29.474 00:06:29.474 real 0m1.243s 00:06:29.474 user 0m1.106s 00:06:29.474 sys 0m0.032s 00:06:29.474 09:43:54 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:29.474 ************************************ 00:06:29.474 09:43:54 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:29.474 END TEST event_reactor_perf 00:06:29.474 ************************************ 00:06:29.734 09:43:54 event -- event/event.sh@49 -- # uname -s 00:06:29.734 09:43:54 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:29.734 09:43:54 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:29.734 09:43:54 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:29.735 09:43:54 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:29.735 09:43:54 event -- common/autotest_common.sh@10 -- # set +x 00:06:29.735 ************************************ 00:06:29.735 START TEST event_scheduler 00:06:29.735 ************************************ 00:06:29.735 09:43:54 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:29.735 * Looking for test storage... 
00:06:29.735 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:29.735 09:43:54 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:29.735 09:43:54 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:06:29.735 09:43:54 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:29.735 09:43:54 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:29.735 09:43:54 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:29.735 09:43:54 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:29.735 09:43:54 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:29.735 09:43:54 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:29.735 09:43:54 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:29.735 09:43:54 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:29.735 09:43:54 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:29.735 09:43:54 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:29.735 09:43:54 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:29.735 09:43:54 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:29.735 09:43:54 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:29.735 09:43:54 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:29.735 09:43:54 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:29.735 09:43:54 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:29.735 09:43:54 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:29.735 09:43:54 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:29.735 09:43:54 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:29.735 09:43:54 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:29.735 09:43:54 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:29.735 09:43:54 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:29.735 09:43:54 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:29.735 09:43:54 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:29.735 09:43:54 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:29.735 09:43:54 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:29.735 09:43:54 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:29.735 09:43:54 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:29.735 09:43:54 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:29.735 09:43:54 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:29.735 09:43:54 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:29.735 09:43:54 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:29.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.735 --rc genhtml_branch_coverage=1 00:06:29.735 --rc genhtml_function_coverage=1 00:06:29.735 --rc genhtml_legend=1 00:06:29.735 --rc geninfo_all_blocks=1 00:06:29.735 --rc geninfo_unexecuted_blocks=1 00:06:29.735 00:06:29.735 ' 00:06:29.735 09:43:54 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:29.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.735 --rc genhtml_branch_coverage=1 00:06:29.735 --rc genhtml_function_coverage=1 00:06:29.735 --rc genhtml_legend=1 00:06:29.735 --rc geninfo_all_blocks=1 00:06:29.735 --rc geninfo_unexecuted_blocks=1 00:06:29.735 00:06:29.735 ' 00:06:29.735 09:43:54 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:29.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.735 --rc genhtml_branch_coverage=1 00:06:29.735 --rc genhtml_function_coverage=1 00:06:29.735 --rc genhtml_legend=1 00:06:29.735 --rc geninfo_all_blocks=1 00:06:29.735 --rc geninfo_unexecuted_blocks=1 00:06:29.735 00:06:29.735 ' 00:06:29.735 09:43:54 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:29.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.735 --rc genhtml_branch_coverage=1 00:06:29.735 --rc genhtml_function_coverage=1 00:06:29.735 --rc genhtml_legend=1 00:06:29.735 --rc geninfo_all_blocks=1 00:06:29.735 --rc geninfo_unexecuted_blocks=1 00:06:29.735 00:06:29.735 ' 00:06:29.735 09:43:54 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:29.735 09:43:54 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58058 00:06:29.735 09:43:54 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:29.735 09:43:54 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:29.735 09:43:54 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58058 00:06:29.735 09:43:54 
event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58058 ']' 00:06:29.735 09:43:54 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.735 09:43:54 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:29.735 09:43:54 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:29.735 09:43:54 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:29.735 09:43:54 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:29.997 [2024-12-06 09:43:55.015744] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:06:29.997 [2024-12-06 09:43:55.016270] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58058 ] 00:06:29.997 [2024-12-06 09:43:55.167725] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:29.997 [2024-12-06 09:43:55.234362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.997 [2024-12-06 09:43:55.234529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:29.997 [2024-12-06 09:43:55.234680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:29.997 [2024-12-06 09:43:55.234686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:30.256 09:43:55 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:30.256 09:43:55 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:06:30.256 09:43:55 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:30.256 09:43:55 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.256 09:43:55 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:30.256 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:30.256 POWER: Cannot set governor of lcore 0 to userspace 00:06:30.256 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:30.256 POWER: Cannot set governor of lcore 0 to performance 00:06:30.256 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:30.256 POWER: Cannot set governor of lcore 0 to userspace 00:06:30.256 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:30.256 POWER: Cannot set governor of lcore 0 to userspace 00:06:30.257 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:06:30.257 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:30.257 POWER: Unable to set Power Management Environment for lcore 0 00:06:30.257 [2024-12-06 09:43:55.292928] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:06:30.257 [2024-12-06 09:43:55.292940] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:06:30.257 [2024-12-06 09:43:55.292949] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:30.257 [2024-12-06 09:43:55.292959] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:30.257 [2024-12-06 09:43:55.292988] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:30.257 [2024-12-06 09:43:55.292995] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:30.257 09:43:55 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.257 09:43:55 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:30.257 09:43:55 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.257 09:43:55 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:30.257 [2024-12-06 09:43:55.353339] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:30.257 [2024-12-06 09:43:55.388279] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:30.257 09:43:55 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.257 09:43:55 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:30.257 09:43:55 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:30.257 09:43:55 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:30.257 09:43:55 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:30.257 ************************************ 00:06:30.257 START TEST scheduler_create_thread 00:06:30.257 ************************************ 00:06:30.257 09:43:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:06:30.257 09:43:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:30.257 09:43:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.257 09:43:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:30.257 2 00:06:30.257 09:43:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.257 09:43:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:30.257 09:43:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.257 09:43:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:30.257 3 00:06:30.257 09:43:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.257 09:43:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:30.257 09:43:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.257 09:43:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:30.257 4 00:06:30.257 09:43:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.257 09:43:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:30.257 09:43:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.257 09:43:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:30.257 5 00:06:30.257 09:43:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.257 09:43:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:30.257 09:43:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.257 09:43:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:30.257 6 00:06:30.257 09:43:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.257 09:43:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:30.257 09:43:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.257 09:43:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:30.257 7 00:06:30.257 09:43:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.257 09:43:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:30.257 09:43:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.257 09:43:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:30.257 8 00:06:30.257 09:43:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.257 09:43:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:30.257 09:43:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.257 09:43:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:30.257 9 00:06:30.257 09:43:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.257 09:43:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:30.257 09:43:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.257 09:43:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:30.257 10 00:06:30.257 09:43:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.257 09:43:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:30.257 09:43:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.257 09:43:55 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:30.257 09:43:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.257 09:43:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:30.257 09:43:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:30.257 09:43:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.257 09:43:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:30.257 09:43:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.257 09:43:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:30.257 09:43:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.257 09:43:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:32.161 09:43:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.161 09:43:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:32.161 09:43:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:32.161 09:43:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.161 09:43:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:33.100 ************************************ 00:06:33.100 END TEST scheduler_create_thread 00:06:33.100 ************************************ 00:06:33.100 09:43:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.100 00:06:33.100 real 0m2.615s 00:06:33.100 user 0m0.015s 00:06:33.100 sys 0m0.012s 00:06:33.100 09:43:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:33.100 09:43:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:33.100 09:43:58 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:33.100 09:43:58 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58058 00:06:33.100 09:43:58 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58058 ']' 00:06:33.100 09:43:58 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58058 00:06:33.100 09:43:58 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:06:33.100 09:43:58 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:33.100 09:43:58 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58058 00:06:33.100 killing process with pid 58058 00:06:33.100 09:43:58 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:33.100 09:43:58 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:33.100 09:43:58 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
58058' 00:06:33.100 09:43:58 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58058 00:06:33.100 09:43:58 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58058 00:06:33.359 [2024-12-06 09:43:58.496691] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:33.619 00:06:33.619 real 0m3.933s 00:06:33.619 user 0m5.790s 00:06:33.619 sys 0m0.368s 00:06:33.619 09:43:58 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:33.619 ************************************ 00:06:33.619 END TEST event_scheduler 00:06:33.619 ************************************ 00:06:33.619 09:43:58 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:33.619 09:43:58 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:33.619 09:43:58 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:33.619 09:43:58 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:33.619 09:43:58 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:33.619 09:43:58 event -- common/autotest_common.sh@10 -- # set +x 00:06:33.619 ************************************ 00:06:33.619 START TEST app_repeat 00:06:33.619 ************************************ 00:06:33.619 09:43:58 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:06:33.619 09:43:58 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:33.619 09:43:58 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:33.619 09:43:58 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:33.619 09:43:58 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:33.619 09:43:58 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:33.619 09:43:58 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:33.619 09:43:58 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:33.619 09:43:58 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58145 00:06:33.619 09:43:58 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:33.619 09:43:58 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:33.619 Process app_repeat pid: 58145 00:06:33.619 spdk_app_start Round 0 00:06:33.619 09:43:58 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58145' 00:06:33.619 09:43:58 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:33.619 09:43:58 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:33.619 09:43:58 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58145 /var/tmp/spdk-nbd.sock 00:06:33.619 09:43:58 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58145 ']' 00:06:33.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:33.619 09:43:58 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:33.619 09:43:58 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:33.619 09:43:58 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
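For reference, the event_scheduler test that finished above (PID 58058) drives the running SPDK app entirely over JSON-RPC. A condensed, hedged sketch of that sequence — command names, plugin name, and thread parameters are copied from the trace above; the socket path, return-value capture, and plugin import path are simplified assumptions — would look roughly like:

    # assumes a scheduler test app already listening on /var/tmp/spdk.sock and
    # that scheduler_plugin is importable (e.g. via PYTHONPATH) — both assumptions here
    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py "$@"; }
    rpc framework_set_scheduler dynamic     # the POWER/governor warnings above are non-fatal in this VM
    rpc framework_start_init
    # create a test thread through the plugin, set it 50% active, then delete it
    tid=$(rpc --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
    rpc --plugin scheduler_plugin scheduler_thread_set_active "$tid" 50
    rpc --plugin scheduler_plugin scheduler_thread_delete "$tid"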
00:06:33.619 09:43:58 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:33.619 09:43:58 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:33.619 [2024-12-06 09:43:58.791142] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:06:33.619 [2024-12-06 09:43:58.791374] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58145 ] 00:06:33.878 [2024-12-06 09:43:58.929676] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:33.878 [2024-12-06 09:43:58.976269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:33.878 [2024-12-06 09:43:58.976282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.878 [2024-12-06 09:43:59.031231] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:33.878 09:43:59 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:33.878 09:43:59 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:33.878 09:43:59 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:34.138 Malloc0 00:06:34.138 09:43:59 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:34.707 Malloc1 00:06:34.707 09:43:59 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:34.707 09:43:59 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:34.707 09:43:59 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:34.707 09:43:59 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:34.707 09:43:59 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:34.707 09:43:59 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:34.707 09:43:59 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:34.707 09:43:59 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:34.707 09:43:59 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:34.707 09:43:59 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:34.707 09:43:59 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:34.707 09:43:59 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:34.707 09:43:59 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:34.707 09:43:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:34.707 09:43:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:34.707 09:43:59 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:34.967 /dev/nbd0 00:06:34.967 09:44:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:34.967 09:44:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:34.967 09:44:00 event.app_repeat -- common/autotest_common.sh@872 -- # local 
nbd_name=nbd0 00:06:34.967 09:44:00 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:34.967 09:44:00 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:34.967 09:44:00 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:34.967 09:44:00 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:34.967 09:44:00 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:34.967 09:44:00 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:34.967 09:44:00 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:34.967 09:44:00 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:34.967 1+0 records in 00:06:34.967 1+0 records out 00:06:34.967 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00032063 s, 12.8 MB/s 00:06:34.967 09:44:00 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:34.967 09:44:00 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:34.967 09:44:00 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:34.967 09:44:00 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:34.967 09:44:00 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:34.967 09:44:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:34.967 09:44:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:34.967 09:44:00 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:35.226 /dev/nbd1 00:06:35.226 09:44:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:35.226 09:44:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:35.226 09:44:00 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:35.226 09:44:00 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:35.226 09:44:00 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:35.226 09:44:00 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:35.226 09:44:00 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:35.226 09:44:00 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:35.226 09:44:00 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:35.226 09:44:00 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:35.226 09:44:00 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:35.226 1+0 records in 00:06:35.226 1+0 records out 00:06:35.226 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000288009 s, 14.2 MB/s 00:06:35.226 09:44:00 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:35.226 09:44:00 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:35.226 09:44:00 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:35.226 09:44:00 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:35.226 09:44:00 event.app_repeat -- 
common/autotest_common.sh@893 -- # return 0 00:06:35.226 09:44:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:35.226 09:44:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:35.226 09:44:00 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:35.226 09:44:00 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:35.226 09:44:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:35.485 09:44:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:35.485 { 00:06:35.485 "nbd_device": "/dev/nbd0", 00:06:35.485 "bdev_name": "Malloc0" 00:06:35.485 }, 00:06:35.485 { 00:06:35.485 "nbd_device": "/dev/nbd1", 00:06:35.485 "bdev_name": "Malloc1" 00:06:35.485 } 00:06:35.485 ]' 00:06:35.485 09:44:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:35.485 { 00:06:35.485 "nbd_device": "/dev/nbd0", 00:06:35.485 "bdev_name": "Malloc0" 00:06:35.485 }, 00:06:35.485 { 00:06:35.485 "nbd_device": "/dev/nbd1", 00:06:35.485 "bdev_name": "Malloc1" 00:06:35.485 } 00:06:35.485 ]' 00:06:35.485 09:44:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:35.485 09:44:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:35.485 /dev/nbd1' 00:06:35.485 09:44:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:35.485 /dev/nbd1' 00:06:35.485 09:44:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:35.485 09:44:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:35.485 09:44:00 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:35.485 09:44:00 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:35.485 09:44:00 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:35.485 09:44:00 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:35.485 09:44:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:35.485 09:44:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:35.485 09:44:00 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:35.485 09:44:00 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:35.485 09:44:00 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:35.485 09:44:00 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:35.485 256+0 records in 00:06:35.485 256+0 records out 00:06:35.485 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00811692 s, 129 MB/s 00:06:35.485 09:44:00 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:35.485 09:44:00 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:35.485 256+0 records in 00:06:35.485 256+0 records out 00:06:35.485 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0246597 s, 42.5 MB/s 00:06:35.485 09:44:00 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:35.485 09:44:00 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:35.744 256+0 records in 00:06:35.744 
256+0 records out 00:06:35.744 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.03197 s, 32.8 MB/s 00:06:35.744 09:44:00 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:35.744 09:44:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:35.744 09:44:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:35.744 09:44:00 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:35.744 09:44:00 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:35.744 09:44:00 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:35.744 09:44:00 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:35.744 09:44:00 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:35.744 09:44:00 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:35.744 09:44:00 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:35.744 09:44:00 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:35.744 09:44:00 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:35.744 09:44:00 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:35.744 09:44:00 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:35.744 09:44:00 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:35.744 09:44:00 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:35.744 09:44:00 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:35.744 09:44:00 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:35.744 09:44:00 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:36.003 09:44:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:36.003 09:44:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:36.003 09:44:01 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:36.003 09:44:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:36.003 09:44:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:36.003 09:44:01 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:36.003 09:44:01 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:36.003 09:44:01 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:36.003 09:44:01 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:36.003 09:44:01 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:36.262 09:44:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:36.262 09:44:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:36.262 09:44:01 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:36.262 09:44:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:36.262 09:44:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
00:06:36.262 09:44:01 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:36.262 09:44:01 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:36.262 09:44:01 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:36.262 09:44:01 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:36.262 09:44:01 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:36.262 09:44:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:36.520 09:44:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:36.520 09:44:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:36.520 09:44:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:36.520 09:44:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:36.520 09:44:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:36.520 09:44:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:36.521 09:44:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:36.521 09:44:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:36.521 09:44:01 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:36.521 09:44:01 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:36.521 09:44:01 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:36.521 09:44:01 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:36.521 09:44:01 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:36.779 09:44:02 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:37.038 [2024-12-06 09:44:02.173539] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:37.038 [2024-12-06 09:44:02.216394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:37.038 [2024-12-06 09:44:02.216419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.038 [2024-12-06 09:44:02.274690] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:37.038 [2024-12-06 09:44:02.274784] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:37.038 [2024-12-06 09:44:02.274796] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:40.326 spdk_app_start Round 1 00:06:40.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:40.326 09:44:05 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:40.326 09:44:05 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:40.326 09:44:05 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58145 /var/tmp/spdk-nbd.sock 00:06:40.326 09:44:05 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58145 ']' 00:06:40.326 09:44:05 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:40.326 09:44:05 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:40.326 09:44:05 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
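Each app_repeat round above repeats the same NBD data-verify cycle against the app listening on /var/tmp/spdk-nbd.sock. A hedged, condensed replay of what the trace shows (paths, sizes, and RPC names copied from the log; the waitfornbd/waitfornbd_exit polling and error handling are omitted, and the write/verify phases are folded into one loop):

    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock "$@"; }
    rpc bdev_malloc_create 64 4096          # -> Malloc0
    rpc bdev_malloc_create 64 4096          # -> Malloc1
    rpc nbd_start_disk Malloc0 /dev/nbd0
    rpc nbd_start_disk Malloc1 /dev/nbd1
    tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
    dd if=/dev/urandom of="$tmp" bs=4096 count=256            # 1 MiB of random data
    for d in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp" of="$d" bs=4096 count=256 oflag=direct    # write through the NBD device
        cmp -b -n 1M "$tmp" "$d"                               # read back and verify
    done
    rm "$tmp"
    rpc nbd_stop_disk /dev/nbd0
    rpc nbd_stop_disk /dev/nbd1
    rpc spdk_kill_instance SIGTERM                             # end of the round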
00:06:40.326 09:44:05 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:40.326 09:44:05 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:40.326 09:44:05 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:40.326 09:44:05 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:40.326 09:44:05 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:40.584 Malloc0 00:06:40.584 09:44:05 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:40.842 Malloc1 00:06:40.842 09:44:05 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:40.842 09:44:05 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:40.842 09:44:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:40.842 09:44:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:40.842 09:44:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:40.842 09:44:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:40.842 09:44:05 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:40.842 09:44:05 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:40.842 09:44:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:40.842 09:44:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:40.842 09:44:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:40.842 09:44:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:40.842 09:44:05 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:40.842 09:44:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:40.842 09:44:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:40.842 09:44:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:41.101 /dev/nbd0 00:06:41.101 09:44:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:41.101 09:44:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:41.101 09:44:06 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:41.101 09:44:06 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:41.101 09:44:06 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:41.101 09:44:06 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:41.101 09:44:06 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:41.101 09:44:06 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:41.101 09:44:06 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:41.101 09:44:06 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:41.101 09:44:06 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:41.101 1+0 records in 00:06:41.101 1+0 records out 
00:06:41.101 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000586387 s, 7.0 MB/s 00:06:41.101 09:44:06 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:41.101 09:44:06 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:41.101 09:44:06 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:41.101 09:44:06 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:41.101 09:44:06 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:41.101 09:44:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:41.101 09:44:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:41.101 09:44:06 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:41.359 /dev/nbd1 00:06:41.359 09:44:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:41.359 09:44:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:41.360 09:44:06 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:41.360 09:44:06 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:41.360 09:44:06 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:41.360 09:44:06 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:41.360 09:44:06 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:41.360 09:44:06 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:41.360 09:44:06 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:41.360 09:44:06 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:41.360 09:44:06 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:41.360 1+0 records in 00:06:41.360 1+0 records out 00:06:41.360 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000245213 s, 16.7 MB/s 00:06:41.360 09:44:06 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:41.360 09:44:06 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:41.360 09:44:06 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:41.360 09:44:06 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:41.360 09:44:06 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:41.360 09:44:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:41.360 09:44:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:41.360 09:44:06 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:41.360 09:44:06 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:41.360 09:44:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:41.619 09:44:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:41.619 { 00:06:41.619 "nbd_device": "/dev/nbd0", 00:06:41.619 "bdev_name": "Malloc0" 00:06:41.619 }, 00:06:41.619 { 00:06:41.619 "nbd_device": "/dev/nbd1", 00:06:41.619 "bdev_name": "Malloc1" 00:06:41.619 } 
00:06:41.619 ]' 00:06:41.619 09:44:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:41.619 { 00:06:41.619 "nbd_device": "/dev/nbd0", 00:06:41.619 "bdev_name": "Malloc0" 00:06:41.619 }, 00:06:41.619 { 00:06:41.619 "nbd_device": "/dev/nbd1", 00:06:41.619 "bdev_name": "Malloc1" 00:06:41.619 } 00:06:41.619 ]' 00:06:41.619 09:44:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:41.878 09:44:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:41.878 /dev/nbd1' 00:06:41.878 09:44:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:41.878 /dev/nbd1' 00:06:41.878 09:44:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:41.878 09:44:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:41.878 09:44:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:41.879 09:44:06 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:41.879 09:44:06 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:41.879 09:44:06 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:41.879 09:44:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:41.879 09:44:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:41.879 09:44:06 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:41.879 09:44:06 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:41.879 09:44:06 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:41.879 09:44:06 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:41.879 256+0 records in 00:06:41.879 256+0 records out 00:06:41.879 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00490994 s, 214 MB/s 00:06:41.879 09:44:06 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:41.879 09:44:06 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:41.879 256+0 records in 00:06:41.879 256+0 records out 00:06:41.879 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0247954 s, 42.3 MB/s 00:06:41.879 09:44:06 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:41.879 09:44:06 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:41.879 256+0 records in 00:06:41.879 256+0 records out 00:06:41.879 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0257562 s, 40.7 MB/s 00:06:41.879 09:44:07 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:41.879 09:44:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:41.879 09:44:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:41.879 09:44:07 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:41.879 09:44:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:41.879 09:44:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:41.879 09:44:07 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:41.879 09:44:07 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:41.879 09:44:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:41.879 09:44:07 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:41.879 09:44:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:41.879 09:44:07 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:41.879 09:44:07 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:41.879 09:44:07 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:41.879 09:44:07 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:41.879 09:44:07 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:41.879 09:44:07 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:41.879 09:44:07 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:41.879 09:44:07 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:42.138 09:44:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:42.138 09:44:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:42.138 09:44:07 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:42.138 09:44:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:42.138 09:44:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:42.138 09:44:07 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:42.138 09:44:07 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:42.138 09:44:07 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:42.138 09:44:07 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:42.138 09:44:07 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:42.396 09:44:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:42.397 09:44:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:42.397 09:44:07 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:42.397 09:44:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:42.397 09:44:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:42.397 09:44:07 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:42.397 09:44:07 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:42.397 09:44:07 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:42.397 09:44:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:42.397 09:44:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:42.397 09:44:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:42.656 09:44:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:42.656 09:44:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:42.656 09:44:07 event.app_repeat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:06:42.656 09:44:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:42.656 09:44:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:42.656 09:44:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:42.656 09:44:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:42.656 09:44:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:42.656 09:44:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:42.656 09:44:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:42.656 09:44:07 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:42.656 09:44:07 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:42.656 09:44:07 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:43.225 09:44:08 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:43.225 [2024-12-06 09:44:08.388011] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:43.225 [2024-12-06 09:44:08.453654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:43.225 [2024-12-06 09:44:08.453665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.510 [2024-12-06 09:44:08.513303] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:43.510 [2024-12-06 09:44:08.513418] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:43.510 [2024-12-06 09:44:08.513430] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:46.127 09:44:11 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:46.127 spdk_app_start Round 2 00:06:46.127 09:44:11 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:46.127 09:44:11 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58145 /var/tmp/spdk-nbd.sock 00:06:46.127 09:44:11 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58145 ']' 00:06:46.127 09:44:11 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:46.127 09:44:11 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:46.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:46.127 09:44:11 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
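The nbd_get_count helper exercised above derives the attached-device count from the nbd_get_disks RPC output: it reports 2 while both malloc bdevs are exported and 0 after nbd_stop_disk. A small hedged sketch of that check (reusing the rpc helper sketched earlier; the jq and grep usage mirrors the trace):

    disks_json=$(rpc nbd_get_disks)
    names=$(echo "$disks_json" | jq -r '.[] | .nbd_device')
    count=$(echo "$names" | grep -c /dev/nbd || true)   # grep -c exits non-zero on zero matches
    [ "$count" -eq 0 ] && echo "all NBD devices detached"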
00:06:46.127 09:44:11 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:46.127 09:44:11 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:46.387 09:44:11 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:46.387 09:44:11 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:46.387 09:44:11 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:46.646 Malloc0 00:06:46.646 09:44:11 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:46.905 Malloc1 00:06:46.905 09:44:12 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:46.905 09:44:12 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:46.905 09:44:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:46.905 09:44:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:46.905 09:44:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:46.905 09:44:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:46.905 09:44:12 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:46.905 09:44:12 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:46.905 09:44:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:46.905 09:44:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:46.905 09:44:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:46.905 09:44:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:46.905 09:44:12 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:46.905 09:44:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:46.905 09:44:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:46.905 09:44:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:47.164 /dev/nbd0 00:06:47.164 09:44:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:47.164 09:44:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:47.164 09:44:12 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:47.164 09:44:12 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:47.164 09:44:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:47.164 09:44:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:47.164 09:44:12 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:47.164 09:44:12 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:47.164 09:44:12 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:47.164 09:44:12 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:47.164 09:44:12 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:47.164 1+0 records in 00:06:47.164 1+0 records out 
00:06:47.164 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000322268 s, 12.7 MB/s 00:06:47.164 09:44:12 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:47.164 09:44:12 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:47.164 09:44:12 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:47.424 09:44:12 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:47.424 09:44:12 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:47.424 09:44:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:47.424 09:44:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:47.424 09:44:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:47.424 /dev/nbd1 00:06:47.683 09:44:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:47.683 09:44:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:47.683 09:44:12 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:47.683 09:44:12 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:47.683 09:44:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:47.683 09:44:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:47.683 09:44:12 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:47.683 09:44:12 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:47.683 09:44:12 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:47.683 09:44:12 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:47.683 09:44:12 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:47.683 1+0 records in 00:06:47.683 1+0 records out 00:06:47.683 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000371357 s, 11.0 MB/s 00:06:47.683 09:44:12 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:47.683 09:44:12 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:47.683 09:44:12 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:47.683 09:44:12 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:47.683 09:44:12 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:47.683 09:44:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:47.683 09:44:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:47.683 09:44:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:47.683 09:44:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:47.683 09:44:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:47.943 09:44:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:47.943 { 00:06:47.943 "nbd_device": "/dev/nbd0", 00:06:47.943 "bdev_name": "Malloc0" 00:06:47.943 }, 00:06:47.943 { 00:06:47.943 "nbd_device": "/dev/nbd1", 00:06:47.943 "bdev_name": "Malloc1" 00:06:47.943 } 
00:06:47.943 ]' 00:06:47.943 09:44:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:47.943 { 00:06:47.943 "nbd_device": "/dev/nbd0", 00:06:47.943 "bdev_name": "Malloc0" 00:06:47.943 }, 00:06:47.943 { 00:06:47.943 "nbd_device": "/dev/nbd1", 00:06:47.943 "bdev_name": "Malloc1" 00:06:47.943 } 00:06:47.943 ]' 00:06:47.943 09:44:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:47.943 09:44:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:47.943 /dev/nbd1' 00:06:47.943 09:44:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:47.943 09:44:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:47.943 /dev/nbd1' 00:06:47.943 09:44:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:47.943 09:44:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:47.943 09:44:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:47.943 09:44:13 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:47.943 09:44:13 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:47.943 09:44:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:47.943 09:44:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:47.943 09:44:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:47.943 09:44:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:47.943 09:44:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:47.943 09:44:13 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:47.943 256+0 records in 00:06:47.943 256+0 records out 00:06:47.943 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0110379 s, 95.0 MB/s 00:06:47.943 09:44:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:47.943 09:44:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:47.943 256+0 records in 00:06:47.943 256+0 records out 00:06:47.943 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0225581 s, 46.5 MB/s 00:06:47.943 09:44:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:47.943 09:44:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:47.943 256+0 records in 00:06:47.943 256+0 records out 00:06:47.943 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0281979 s, 37.2 MB/s 00:06:47.943 09:44:13 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:47.943 09:44:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:47.943 09:44:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:47.943 09:44:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:47.943 09:44:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:47.943 09:44:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:47.943 09:44:13 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:47.943 09:44:13 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:47.943 09:44:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:48.203 09:44:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:48.203 09:44:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:48.203 09:44:13 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:48.203 09:44:13 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:48.203 09:44:13 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:48.203 09:44:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:48.203 09:44:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:48.203 09:44:13 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:48.203 09:44:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:48.203 09:44:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:48.462 09:44:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:48.462 09:44:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:48.462 09:44:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:48.462 09:44:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:48.462 09:44:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:48.462 09:44:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:48.462 09:44:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:48.462 09:44:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:48.462 09:44:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:48.462 09:44:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:48.723 09:44:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:48.723 09:44:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:48.723 09:44:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:48.723 09:44:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:48.723 09:44:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:48.723 09:44:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:48.723 09:44:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:48.723 09:44:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:48.723 09:44:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:48.723 09:44:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:48.723 09:44:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:48.983 09:44:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:48.983 09:44:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:48.983 09:44:14 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:06:48.983 09:44:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:48.983 09:44:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:48.983 09:44:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:48.983 09:44:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:48.983 09:44:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:48.983 09:44:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:48.983 09:44:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:48.983 09:44:14 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:48.983 09:44:14 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:48.983 09:44:14 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:49.551 09:44:14 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:49.551 [2024-12-06 09:44:14.693629] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:49.551 [2024-12-06 09:44:14.753557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:49.551 [2024-12-06 09:44:14.753801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.551 [2024-12-06 09:44:14.808033] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:49.551 [2024-12-06 09:44:14.808150] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:49.551 [2024-12-06 09:44:14.808162] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:52.837 09:44:17 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58145 /var/tmp/spdk-nbd.sock 00:06:52.837 09:44:17 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58145 ']' 00:06:52.837 09:44:17 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:52.837 09:44:17 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:52.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:52.837 09:44:17 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
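A note on the pattern traced above: nbd_dd_data_verify first fills a temp file with random data, writes it through each /dev/nbdX device with O_DIRECT, and later compares the devices back against that file. A minimal sketch of the write/verify round trip, with the temp-file path and device list assumed for illustration:

    #!/usr/bin/env bash
    # Sketch of the dd write + cmp verify pattern from the trace (paths assumed).
    tmp_file=/tmp/nbdrandtest
    nbd_list=(/dev/nbd0 /dev/nbd1)

    # 1 MiB of random data: 256 blocks of 4 KiB.
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256

    # Write the same data to every NBD device, bypassing the page cache.
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
    done

    # Verify: the first 1 MiB of each device must match the source file.
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$dev" || echo "mismatch on $dev"
    done
    rm "$tmp_file"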
00:06:52.837 09:44:17 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:52.837 09:44:17 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:52.837 09:44:17 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:52.837 09:44:17 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:52.837 09:44:17 event.app_repeat -- event/event.sh@39 -- # killprocess 58145 00:06:52.837 09:44:17 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58145 ']' 00:06:52.837 09:44:17 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58145 00:06:52.837 09:44:17 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:06:52.837 09:44:17 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:52.837 09:44:17 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58145 00:06:52.837 09:44:17 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:52.837 killing process with pid 58145 00:06:52.837 09:44:17 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:52.837 09:44:17 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58145' 00:06:52.837 09:44:17 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58145 00:06:52.837 09:44:17 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58145 00:06:52.837 spdk_app_start is called in Round 0. 00:06:52.837 Shutdown signal received, stop current app iteration 00:06:52.838 Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 reinitialization... 00:06:52.838 spdk_app_start is called in Round 1. 00:06:52.838 Shutdown signal received, stop current app iteration 00:06:52.838 Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 reinitialization... 00:06:52.838 spdk_app_start is called in Round 2. 00:06:52.838 Shutdown signal received, stop current app iteration 00:06:52.838 Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 reinitialization... 00:06:52.838 spdk_app_start is called in Round 3. 00:06:52.838 Shutdown signal received, stop current app iteration 00:06:52.838 09:44:18 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:52.838 09:44:18 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:52.838 00:06:52.838 real 0m19.267s 00:06:52.838 user 0m43.978s 00:06:52.838 sys 0m2.977s 00:06:52.838 09:44:18 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:52.838 09:44:18 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:52.838 ************************************ 00:06:52.838 END TEST app_repeat 00:06:52.838 ************************************ 00:06:52.838 09:44:18 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:52.838 09:44:18 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:52.838 09:44:18 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:52.838 09:44:18 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:52.838 09:44:18 event -- common/autotest_common.sh@10 -- # set +x 00:06:52.838 ************************************ 00:06:52.838 START TEST cpu_locks 00:06:52.838 ************************************ 00:06:52.838 09:44:18 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:53.097 * Looking for test storage... 
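The app_repeat teardown above goes through the killprocess helper, which resolves the process name for the PID, terminates it, and reaps it. A simplified sketch of that helper (the real one also special-cases sudo-wrapped processes):

    # Simplified sketch of the killprocess helper seen in the trace.
    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1                        # bail out if the PID is already gone
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")   # e.g. "reactor_0"
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                        # reap the child and collect its status
    }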
00:06:53.097 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:53.097 09:44:18 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:53.097 09:44:18 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:53.097 09:44:18 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:06:53.097 09:44:18 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:53.097 09:44:18 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:53.097 09:44:18 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:53.097 09:44:18 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:53.097 09:44:18 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:53.097 09:44:18 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:53.097 09:44:18 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:53.097 09:44:18 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:53.097 09:44:18 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:53.097 09:44:18 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:53.097 09:44:18 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:53.097 09:44:18 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:53.097 09:44:18 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:53.097 09:44:18 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:53.098 09:44:18 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:53.098 09:44:18 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:53.098 09:44:18 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:53.098 09:44:18 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:53.098 09:44:18 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:53.098 09:44:18 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:53.098 09:44:18 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:53.098 09:44:18 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:53.098 09:44:18 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:53.098 09:44:18 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:53.098 09:44:18 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:53.098 09:44:18 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:53.098 09:44:18 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:53.098 09:44:18 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:53.098 09:44:18 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:53.098 09:44:18 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:53.098 09:44:18 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:53.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.098 --rc genhtml_branch_coverage=1 00:06:53.098 --rc genhtml_function_coverage=1 00:06:53.098 --rc genhtml_legend=1 00:06:53.098 --rc geninfo_all_blocks=1 00:06:53.098 --rc geninfo_unexecuted_blocks=1 00:06:53.098 00:06:53.098 ' 00:06:53.098 09:44:18 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:53.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.098 --rc genhtml_branch_coverage=1 00:06:53.098 --rc genhtml_function_coverage=1 
00:06:53.098 --rc genhtml_legend=1 00:06:53.098 --rc geninfo_all_blocks=1 00:06:53.098 --rc geninfo_unexecuted_blocks=1 00:06:53.098 00:06:53.098 ' 00:06:53.098 09:44:18 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:53.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.098 --rc genhtml_branch_coverage=1 00:06:53.098 --rc genhtml_function_coverage=1 00:06:53.098 --rc genhtml_legend=1 00:06:53.098 --rc geninfo_all_blocks=1 00:06:53.098 --rc geninfo_unexecuted_blocks=1 00:06:53.098 00:06:53.098 ' 00:06:53.098 09:44:18 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:53.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.098 --rc genhtml_branch_coverage=1 00:06:53.098 --rc genhtml_function_coverage=1 00:06:53.098 --rc genhtml_legend=1 00:06:53.098 --rc geninfo_all_blocks=1 00:06:53.098 --rc geninfo_unexecuted_blocks=1 00:06:53.098 00:06:53.098 ' 00:06:53.098 09:44:18 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:53.098 09:44:18 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:53.098 09:44:18 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:53.098 09:44:18 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:53.098 09:44:18 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:53.098 09:44:18 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:53.098 09:44:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:53.098 ************************************ 00:06:53.098 START TEST default_locks 00:06:53.098 ************************************ 00:06:53.098 09:44:18 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:06:53.098 09:44:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58589 00:06:53.098 09:44:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58589 00:06:53.098 09:44:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:53.098 09:44:18 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58589 ']' 00:06:53.098 09:44:18 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.098 09:44:18 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:53.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:53.098 09:44:18 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:53.098 09:44:18 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:53.098 09:44:18 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:53.098 [2024-12-06 09:44:18.348642] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
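The lcov probe above compares version strings field by field (split on IFS=.-:, one decimal at a time) to decide whether the 1.x or 2.x option set applies. A shorter equivalent of that "lt 1.15 2" check, using sort -V instead of the field-by-field loop the script actually traces:

    # Hypothetical shorthand for the version comparison above, using sort -V.
    version_lt() {
        [ "$1" = "$2" ] && return 1
        [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
    }

    version_lt 1.15 2 && echo "lcov older than 2.x: use the lcov_branch/function_coverage flags"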
00:06:53.098 [2024-12-06 09:44:18.348781] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58589 ] 00:06:53.358 [2024-12-06 09:44:18.488889] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.358 [2024-12-06 09:44:18.548467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.358 [2024-12-06 09:44:18.623140] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:53.618 09:44:18 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:53.618 09:44:18 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:06:53.618 09:44:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58589 00:06:53.618 09:44:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:53.618 09:44:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58589 00:06:54.188 09:44:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58589 00:06:54.188 09:44:19 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58589 ']' 00:06:54.188 09:44:19 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58589 00:06:54.188 09:44:19 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:06:54.188 09:44:19 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:54.188 09:44:19 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58589 00:06:54.188 09:44:19 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:54.188 09:44:19 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:54.188 killing process with pid 58589 00:06:54.188 09:44:19 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58589' 00:06:54.188 09:44:19 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58589 00:06:54.188 09:44:19 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58589 00:06:54.447 09:44:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58589 00:06:54.447 09:44:19 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:06:54.448 09:44:19 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58589 00:06:54.448 09:44:19 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:54.448 09:44:19 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:54.448 09:44:19 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:54.448 09:44:19 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:54.448 09:44:19 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58589 00:06:54.448 09:44:19 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58589 ']' 00:06:54.448 09:44:19 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:54.448 
09:44:19 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:54.448 09:44:19 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:54.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:54.448 09:44:19 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:54.448 09:44:19 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:54.448 ERROR: process (pid: 58589) is no longer running 00:06:54.448 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58589) - No such process 00:06:54.448 09:44:19 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:54.448 09:44:19 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:06:54.448 09:44:19 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:06:54.448 09:44:19 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:54.448 09:44:19 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:54.448 09:44:19 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:54.448 09:44:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:54.448 09:44:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:54.448 09:44:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:54.448 09:44:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:54.448 00:06:54.448 real 0m1.412s 00:06:54.448 user 0m1.380s 00:06:54.448 sys 0m0.540s 00:06:54.448 09:44:19 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:54.448 09:44:19 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:54.448 ************************************ 00:06:54.448 END TEST default_locks 00:06:54.448 ************************************ 00:06:54.708 09:44:19 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:54.708 09:44:19 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:54.708 09:44:19 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:54.708 09:44:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:54.708 ************************************ 00:06:54.708 START TEST default_locks_via_rpc 00:06:54.708 ************************************ 00:06:54.708 09:44:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:06:54.708 09:44:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58628 00:06:54.708 09:44:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58628 00:06:54.708 09:44:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:54.708 09:44:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58628 ']' 00:06:54.708 09:44:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:54.708 09:44:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:06:54.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:54.708 09:44:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:54.708 09:44:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:54.708 09:44:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:54.708 [2024-12-06 09:44:19.840132] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:06:54.708 [2024-12-06 09:44:19.840272] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58628 ] 00:06:54.967 [2024-12-06 09:44:19.987829] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.967 [2024-12-06 09:44:20.047675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.967 [2024-12-06 09:44:20.121175] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:55.227 09:44:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:55.227 09:44:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:55.227 09:44:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:55.227 09:44:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.227 09:44:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:55.227 09:44:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.227 09:44:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:55.227 09:44:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:55.227 09:44:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:55.227 09:44:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:55.227 09:44:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:55.227 09:44:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.227 09:44:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:55.227 09:44:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.227 09:44:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58628 00:06:55.227 09:44:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58628 00:06:55.227 09:44:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:55.797 09:44:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58628 00:06:55.797 09:44:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 58628 ']' 00:06:55.797 09:44:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 58628 00:06:55.797 09:44:20 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:06:55.797 09:44:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:55.797 09:44:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58628 00:06:55.797 09:44:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:55.797 09:44:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:55.797 killing process with pid 58628 00:06:55.797 09:44:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58628' 00:06:55.797 09:44:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 58628 00:06:55.797 09:44:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 58628 00:06:56.056 00:06:56.057 real 0m1.440s 00:06:56.057 user 0m1.413s 00:06:56.057 sys 0m0.548s 00:06:56.057 09:44:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:56.057 09:44:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.057 ************************************ 00:06:56.057 END TEST default_locks_via_rpc 00:06:56.057 ************************************ 00:06:56.057 09:44:21 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:56.057 09:44:21 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:56.057 09:44:21 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:56.057 09:44:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:56.057 ************************************ 00:06:56.057 START TEST non_locking_app_on_locked_coremask 00:06:56.057 ************************************ 00:06:56.057 09:44:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:06:56.057 09:44:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58677 00:06:56.057 09:44:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58677 /var/tmp/spdk.sock 00:06:56.057 09:44:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:56.057 09:44:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58677 ']' 00:06:56.057 09:44:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:56.057 09:44:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:56.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:56.057 09:44:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
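default_locks_via_rpc exercises the same core locks through RPC: framework_disable_cpumask_locks releases them on a running target and framework_enable_cpumask_locks takes them back, with lslocks used to confirm each state. A sketch of that flow, with the rpc.py path and target PID assumed:

    # Sketch: toggling CPU core locks over RPC and checking them with lslocks (path and PID assumed).
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    spdk_tgt_pid=58628

    $rpc framework_disable_cpumask_locks      # drop the spdk_cpu_lock_* file locks
    lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock && echo "unexpected: locks still held"

    $rpc framework_enable_cpumask_locks       # re-acquire them
    lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock || echo "unexpected: locks missing"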
00:06:56.057 09:44:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:56.057 09:44:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:56.057 [2024-12-06 09:44:21.320180] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:06:56.057 [2024-12-06 09:44:21.320302] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58677 ] 00:06:56.316 [2024-12-06 09:44:21.459312] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.316 [2024-12-06 09:44:21.501332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.316 [2024-12-06 09:44:21.571047] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:56.576 09:44:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:56.576 09:44:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:56.576 09:44:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58680 00:06:56.576 09:44:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58680 /var/tmp/spdk2.sock 00:06:56.576 09:44:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:56.576 09:44:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58680 ']' 00:06:56.576 09:44:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:56.576 09:44:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:56.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:56.576 09:44:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:56.577 09:44:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:56.577 09:44:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:56.837 [2024-12-06 09:44:21.846722] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:06:56.837 [2024-12-06 09:44:21.846828] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58680 ] 00:06:56.837 [2024-12-06 09:44:22.012871] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
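non_locking_app_on_locked_coremask launches a second target on the core the first one already locked; it only comes up because it passes --disable-cpumask-locks and its own RPC socket. A sketch of that launch sequence, with the binary path assumed and the waitforlisten steps elided:

    # Sketch of the two-instance launch traced above (binary path assumed).
    spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

    $spdk_tgt -m 0x1 &                 # first target claims the lock for core 0
    spdk_tgt_pid=$!

    # Second target shares core 0 but skips the lock and serves RPC on its own socket.
    $spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    spdk_tgt_pid2=$!
    # (the test then waits for each socket before checking locks and tearing both down)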
00:06:56.837 [2024-12-06 09:44:22.012936] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.097 [2024-12-06 09:44:22.135374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.097 [2024-12-06 09:44:22.281903] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:57.666 09:44:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:57.666 09:44:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:57.666 09:44:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58677 00:06:57.666 09:44:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58677 00:06:57.666 09:44:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:58.604 09:44:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58677 00:06:58.604 09:44:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58677 ']' 00:06:58.604 09:44:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58677 00:06:58.604 09:44:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:58.605 09:44:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:58.605 09:44:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58677 00:06:58.605 09:44:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:58.605 09:44:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:58.605 killing process with pid 58677 00:06:58.605 09:44:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58677' 00:06:58.605 09:44:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58677 00:06:58.605 09:44:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58677 00:06:59.569 09:44:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58680 00:06:59.569 09:44:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58680 ']' 00:06:59.569 09:44:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58680 00:06:59.569 09:44:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:59.569 09:44:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:59.569 09:44:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58680 00:06:59.569 09:44:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:59.569 09:44:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:59.569 killing process with pid 58680 00:06:59.569 09:44:24 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58680' 00:06:59.569 09:44:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58680 00:06:59.569 09:44:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58680 00:06:59.828 00:06:59.829 real 0m3.676s 00:06:59.829 user 0m4.065s 00:06:59.829 sys 0m1.138s 00:06:59.829 09:44:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:59.829 09:44:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:59.829 ************************************ 00:06:59.829 END TEST non_locking_app_on_locked_coremask 00:06:59.829 ************************************ 00:06:59.829 09:44:24 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:59.829 09:44:24 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:59.829 09:44:24 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:59.829 09:44:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:59.829 ************************************ 00:06:59.829 START TEST locking_app_on_unlocked_coremask 00:06:59.829 ************************************ 00:06:59.829 09:44:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:59.829 09:44:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=58753 00:06:59.829 09:44:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:59.829 09:44:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 58753 /var/tmp/spdk.sock 00:06:59.829 09:44:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58753 ']' 00:06:59.829 09:44:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:59.829 09:44:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:59.829 09:44:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:59.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:59.829 09:44:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:59.829 09:44:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:59.829 [2024-12-06 09:44:25.071365] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:06:59.829 [2024-12-06 09:44:25.071507] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58753 ] 00:07:00.087 [2024-12-06 09:44:25.218037] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:00.087 [2024-12-06 09:44:25.218105] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.087 [2024-12-06 09:44:25.275048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.087 [2024-12-06 09:44:25.343987] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:00.345 09:44:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:00.345 09:44:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:00.345 09:44:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=58761 00:07:00.345 09:44:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 58761 /var/tmp/spdk2.sock 00:07:00.345 09:44:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58761 ']' 00:07:00.345 09:44:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:00.345 09:44:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:00.345 09:44:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:00.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:00.345 09:44:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:00.345 09:44:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:00.345 09:44:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:00.345 [2024-12-06 09:44:25.615010] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
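Each launch is gated by waitforlisten, which blocks until the new target is actually serving RPCs on its Unix socket. A rough approximation of that wait, with rpc_get_methods assumed as the probe and the retry budget chosen arbitrarily:

    # Rough approximation of waitforlisten (probe RPC and retry count are assumptions).
    wait_for_rpc_socket() {
        local sock=$1 i
        for ((i = 0; i < 100; i++)); do
            # rpc_get_methods only answers once the target has bound the socket.
            if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" rpc_get_methods &> /dev/null; then
                return 0
            fi
            sleep 0.1
        done
        return 1
    }

    wait_for_rpc_socket /var/tmp/spdk2.sock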
00:07:00.345 [2024-12-06 09:44:25.615157] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58761 ] 00:07:00.603 [2024-12-06 09:44:25.769954] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.862 [2024-12-06 09:44:25.904279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.862 [2024-12-06 09:44:26.056580] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:01.429 09:44:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:01.429 09:44:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:01.429 09:44:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 58761 00:07:01.429 09:44:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58761 00:07:01.429 09:44:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:02.364 09:44:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 58753 00:07:02.364 09:44:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58753 ']' 00:07:02.364 09:44:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 58753 00:07:02.364 09:44:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:02.364 09:44:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:02.364 09:44:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58753 00:07:02.364 09:44:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:02.364 09:44:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:02.364 killing process with pid 58753 00:07:02.364 09:44:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58753' 00:07:02.364 09:44:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 58753 00:07:02.364 09:44:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 58753 00:07:02.932 09:44:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 58761 00:07:02.932 09:44:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58761 ']' 00:07:02.932 09:44:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 58761 00:07:02.932 09:44:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:02.932 09:44:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:02.932 09:44:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58761 00:07:03.190 09:44:28 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:03.190 09:44:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:03.190 killing process with pid 58761 00:07:03.190 09:44:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58761' 00:07:03.190 09:44:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 58761 00:07:03.190 09:44:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 58761 00:07:03.480 00:07:03.481 real 0m3.619s 00:07:03.481 user 0m3.893s 00:07:03.481 sys 0m1.155s 00:07:03.481 09:44:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:03.481 09:44:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:03.481 ************************************ 00:07:03.481 END TEST locking_app_on_unlocked_coremask 00:07:03.481 ************************************ 00:07:03.481 09:44:28 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:03.481 09:44:28 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:03.481 09:44:28 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:03.481 09:44:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:03.481 ************************************ 00:07:03.481 START TEST locking_app_on_locked_coremask 00:07:03.481 ************************************ 00:07:03.481 09:44:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:07:03.481 09:44:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=58828 00:07:03.481 09:44:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 58828 /var/tmp/spdk.sock 00:07:03.481 09:44:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58828 ']' 00:07:03.481 09:44:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:03.481 09:44:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:03.481 09:44:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:03.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:03.481 09:44:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:03.481 09:44:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:03.481 09:44:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:03.481 [2024-12-06 09:44:28.749330] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:07:03.481 [2024-12-06 09:44:28.749454] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58828 ] 00:07:03.739 [2024-12-06 09:44:28.894326] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.739 [2024-12-06 09:44:28.954454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.996 [2024-12-06 09:44:29.032177] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:04.563 09:44:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:04.563 09:44:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:04.563 09:44:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=58844 00:07:04.563 09:44:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:04.563 09:44:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 58844 /var/tmp/spdk2.sock 00:07:04.563 09:44:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:07:04.563 09:44:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58844 /var/tmp/spdk2.sock 00:07:04.563 09:44:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:04.563 09:44:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:04.563 09:44:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:04.563 09:44:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:04.563 09:44:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 58844 /var/tmp/spdk2.sock 00:07:04.563 09:44:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58844 ']' 00:07:04.563 09:44:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:04.563 09:44:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:04.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:04.563 09:44:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:04.563 09:44:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:04.563 09:44:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:04.563 [2024-12-06 09:44:29.793878] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
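The NOT wrapper traced here runs a command that is expected to fail — waitforlisten on a second target that cannot claim core 0 while PID 58828 holds it — and inverts the exit status so the test passes exactly when the command fails. A stripped-down sketch (the real helper also validates its argument and special-cases exit codes above 128):

    # Stripped-down sketch of the NOT helper.
    NOT() {
        local es=0
        "$@" || es=$?      # run the wrapped command, capture its exit status
        (( es != 0 ))      # succeed only if the wrapped command failed
    }

    NOT waitforlisten 58844 /var/tmp/spdk2.sock && echo "second target failed to start, as expected"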
00:07:04.563 [2024-12-06 09:44:29.794003] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58844 ] 00:07:04.821 [2024-12-06 09:44:29.956810] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 58828 has claimed it. 00:07:04.821 [2024-12-06 09:44:29.956909] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:05.388 ERROR: process (pid: 58844) is no longer running 00:07:05.388 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58844) - No such process 00:07:05.388 09:44:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:05.388 09:44:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:07:05.388 09:44:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:07:05.388 09:44:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:05.388 09:44:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:05.388 09:44:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:05.388 09:44:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 58828 00:07:05.388 09:44:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:05.388 09:44:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58828 00:07:05.647 09:44:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 58828 00:07:05.647 09:44:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58828 ']' 00:07:05.647 09:44:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58828 00:07:05.647 09:44:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:05.647 09:44:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:05.647 09:44:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58828 00:07:05.647 09:44:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:05.647 09:44:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:05.647 killing process with pid 58828 00:07:05.647 09:44:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58828' 00:07:05.647 09:44:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58828 00:07:05.647 09:44:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58828 00:07:06.216 00:07:06.216 real 0m2.575s 00:07:06.216 user 0m2.987s 00:07:06.216 sys 0m0.635s 00:07:06.216 09:44:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:06.216 09:44:31 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:07:06.216 ************************************ 00:07:06.216 END TEST locking_app_on_locked_coremask 00:07:06.216 ************************************ 00:07:06.216 09:44:31 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:06.216 09:44:31 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:06.216 09:44:31 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:06.216 09:44:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:06.216 ************************************ 00:07:06.216 START TEST locking_overlapped_coremask 00:07:06.216 ************************************ 00:07:06.216 09:44:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:07:06.216 09:44:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=58890 00:07:06.216 09:44:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 58890 /var/tmp/spdk.sock 00:07:06.216 09:44:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:07:06.216 09:44:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 58890 ']' 00:07:06.216 09:44:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.216 09:44:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:06.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:06.216 09:44:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:06.216 09:44:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:06.216 09:44:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:06.216 [2024-12-06 09:44:31.361148] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:07:06.216 [2024-12-06 09:44:31.361263] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58890 ] 00:07:06.475 [2024-12-06 09:44:31.503498] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:06.475 [2024-12-06 09:44:31.567258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:06.475 [2024-12-06 09:44:31.567413] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:06.475 [2024-12-06 09:44:31.567417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.475 [2024-12-06 09:44:31.641946] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:06.735 09:44:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:06.735 09:44:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:06.735 09:44:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=58900 00:07:06.735 09:44:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 58900 /var/tmp/spdk2.sock 00:07:06.735 09:44:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:07:06.735 09:44:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58900 /var/tmp/spdk2.sock 00:07:06.735 09:44:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:06.735 09:44:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:06.735 09:44:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:06.735 09:44:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:06.735 09:44:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:06.735 09:44:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 58900 /var/tmp/spdk2.sock 00:07:06.735 09:44:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 58900 ']' 00:07:06.735 09:44:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:06.735 09:44:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:06.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:06.735 09:44:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:06.735 09:44:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:06.735 09:44:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:06.735 [2024-12-06 09:44:31.936708] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
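locking_overlapped_coremask is mask arithmetic: the first target runs with -m 0x7 (binary 111, cores 0-2) and the second is started with -m 0x1c (binary 11100, cores 2-4), so the two masks overlap only on core 2 and the second claim must fail there. A small illustrative helper for decoding such a coremask:

    # Illustrative helper: list the core numbers selected by a hex coremask.
    decode_coremask() {
        local mask=$(( $1 )) core cores=()
        for ((core = 0; core < 64; core++)); do
            (( mask & (1 << core) )) && cores+=("$core")
        done
        echo "${cores[*]}"
    }

    decode_coremask 0x7    # -> 0 1 2
    decode_coremask 0x1c   # -> 2 3 4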
00:07:06.735 [2024-12-06 09:44:31.937476] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58900 ] 00:07:06.994 [2024-12-06 09:44:32.101294] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 58890 has claimed it. 00:07:06.994 [2024-12-06 09:44:32.101362] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:07.564 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58900) - No such process 00:07:07.564 ERROR: process (pid: 58900) is no longer running 00:07:07.564 09:44:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:07.564 09:44:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:07:07.564 09:44:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:07:07.564 09:44:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:07.564 09:44:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:07.564 09:44:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:07.564 09:44:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:07.564 09:44:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:07.564 09:44:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:07.564 09:44:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:07.564 09:44:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 58890 00:07:07.564 09:44:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 58890 ']' 00:07:07.564 09:44:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 58890 00:07:07.564 09:44:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:07:07.564 09:44:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:07.564 09:44:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58890 00:07:07.564 09:44:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:07.564 09:44:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:07.564 09:44:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58890' 00:07:07.564 killing process with pid 58890 00:07:07.564 09:44:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 58890 00:07:07.564 09:44:32 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 58890 00:07:08.134 00:07:08.134 real 0m1.810s 00:07:08.134 user 0m4.946s 00:07:08.134 sys 0m0.445s 00:07:08.134 09:44:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:08.134 09:44:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:08.134 ************************************ 00:07:08.134 END TEST locking_overlapped_coremask 00:07:08.134 ************************************ 00:07:08.134 09:44:33 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:08.134 09:44:33 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:08.134 09:44:33 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:08.134 09:44:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:08.134 ************************************ 00:07:08.134 START TEST locking_overlapped_coremask_via_rpc 00:07:08.134 ************************************ 00:07:08.134 09:44:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:07:08.134 09:44:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=58946 00:07:08.134 09:44:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:08.134 09:44:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 58946 /var/tmp/spdk.sock 00:07:08.134 09:44:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58946 ']' 00:07:08.134 09:44:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:08.134 09:44:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:08.134 09:44:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:08.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:08.134 09:44:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:08.134 09:44:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:08.134 [2024-12-06 09:44:33.236185] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:07:08.134 [2024-12-06 09:44:33.236305] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58946 ] 00:07:08.134 [2024-12-06 09:44:33.382188] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:08.134 [2024-12-06 09:44:33.382289] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:08.394 [2024-12-06 09:44:33.469543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:08.394 [2024-12-06 09:44:33.469710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:08.394 [2024-12-06 09:44:33.469714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.394 [2024-12-06 09:44:33.547203] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:09.333 09:44:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:09.333 09:44:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:09.333 09:44:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=58964 00:07:09.333 09:44:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:09.333 09:44:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 58964 /var/tmp/spdk2.sock 00:07:09.333 09:44:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58964 ']' 00:07:09.333 09:44:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:09.333 09:44:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:09.333 09:44:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:09.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:09.333 09:44:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:09.333 09:44:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:09.333 [2024-12-06 09:44:34.332235] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:07:09.333 [2024-12-06 09:44:34.332715] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58964 ] 00:07:09.333 [2024-12-06 09:44:34.496429] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:09.333 [2024-12-06 09:44:34.496474] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:09.592 [2024-12-06 09:44:34.645661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:09.592 [2024-12-06 09:44:34.645816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:09.592 [2024-12-06 09:44:34.645818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:09.592 [2024-12-06 09:44:34.804513] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:10.160 09:44:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:10.160 09:44:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:10.160 09:44:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:10.160 09:44:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.160 09:44:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.160 09:44:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.160 09:44:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:10.160 09:44:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:07:10.160 09:44:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:10.160 09:44:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:10.160 09:44:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:10.160 09:44:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:10.160 09:44:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:10.160 09:44:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:10.160 09:44:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.160 09:44:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.160 [2024-12-06 09:44:35.411769] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 58946 has claimed it. 00:07:10.160 request: 00:07:10.160 { 00:07:10.160 "method": "framework_enable_cpumask_locks", 00:07:10.160 "req_id": 1 00:07:10.160 } 00:07:10.160 Got JSON-RPC error response 00:07:10.160 response: 00:07:10.160 { 00:07:10.160 "code": -32603, 00:07:10.160 "message": "Failed to claim CPU core: 2" 00:07:10.160 } 00:07:10.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
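The JSON-RPC exchange recorded above (method "framework_enable_cpumask_locks", error -32603 "Failed to claim CPU core: 2") is the expected negative result: the second target listening on /var/tmp/spdk2.sock cannot claim core 2 while pid 58946 still holds its lock file. A minimal sketch of issuing the same call over the UNIX domain socket is shown below; the raw-socket framing and the helper name are assumptions for illustration only, not the rpc.py invocation the test itself uses.

    import json
    import socket

    def rpc_call(sock_path, method, params=None, req_id=1):
        """Send one JSON-RPC 2.0 request to an SPDK-style UNIX socket and return the reply.
        Hypothetical helper for illustration; the tests drive this through scripts/rpc.py."""
        req = {"jsonrpc": "2.0", "method": method, "id": req_id}
        if params is not None:
            req["params"] = params
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
            s.connect(sock_path)
            s.sendall(json.dumps(req).encode())
            # Assumes the whole response arrives in one recv; a real client parses incrementally.
            return json.loads(s.recv(65536).decode())

    # Expected to fail with code -32603 while another target holds the core locks:
    # print(rpc_call("/var/tmp/spdk2.sock", "framework_enable_cpumask_locks"))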
00:07:10.160 09:44:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:10.160 09:44:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:07:10.160 09:44:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:10.160 09:44:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:10.160 09:44:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:10.160 09:44:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 58946 /var/tmp/spdk.sock 00:07:10.160 09:44:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58946 ']' 00:07:10.160 09:44:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:10.160 09:44:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:10.160 09:44:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:10.160 09:44:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:10.160 09:44:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:10.726 09:44:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:10.726 09:44:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:10.726 09:44:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 58964 /var/tmp/spdk2.sock 00:07:10.726 09:44:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58964 ']' 00:07:10.726 09:44:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:10.726 09:44:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:10.726 09:44:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:07:10.726 09:44:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:10.726 09:44:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.984 09:44:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:10.984 ************************************ 00:07:10.984 END TEST locking_overlapped_coremask_via_rpc 00:07:10.984 ************************************ 00:07:10.984 09:44:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:10.984 09:44:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:10.984 09:44:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:10.984 09:44:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:10.984 09:44:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:10.984 00:07:10.984 real 0m2.856s 00:07:10.984 user 0m1.567s 00:07:10.984 sys 0m0.222s 00:07:10.984 09:44:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:10.984 09:44:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.984 09:44:36 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:10.984 09:44:36 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 58946 ]] 00:07:10.984 09:44:36 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 58946 00:07:10.984 09:44:36 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 58946 ']' 00:07:10.984 09:44:36 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 58946 00:07:10.984 09:44:36 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:10.984 09:44:36 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:10.984 09:44:36 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58946 00:07:10.984 killing process with pid 58946 00:07:10.984 09:44:36 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:10.984 09:44:36 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:10.984 09:44:36 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58946' 00:07:10.984 09:44:36 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 58946 00:07:10.984 09:44:36 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 58946 00:07:11.551 09:44:36 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 58964 ]] 00:07:11.551 09:44:36 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 58964 00:07:11.551 09:44:36 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 58964 ']' 00:07:11.551 09:44:36 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 58964 00:07:11.551 09:44:36 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:11.551 09:44:36 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:11.551 
09:44:36 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58964 00:07:11.551 killing process with pid 58964 00:07:11.551 09:44:36 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:07:11.551 09:44:36 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:07:11.551 09:44:36 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58964' 00:07:11.551 09:44:36 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 58964 00:07:11.551 09:44:36 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 58964 00:07:12.179 09:44:37 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:12.179 09:44:37 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:12.179 09:44:37 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 58946 ]] 00:07:12.179 09:44:37 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 58946 00:07:12.179 09:44:37 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 58946 ']' 00:07:12.179 09:44:37 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 58946 00:07:12.179 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (58946) - No such process 00:07:12.179 09:44:37 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 58946 is not found' 00:07:12.179 Process with pid 58946 is not found 00:07:12.179 09:44:37 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 58964 ]] 00:07:12.179 09:44:37 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 58964 00:07:12.179 09:44:37 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 58964 ']' 00:07:12.179 09:44:37 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 58964 00:07:12.179 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (58964) - No such process 00:07:12.179 09:44:37 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 58964 is not found' 00:07:12.179 Process with pid 58964 is not found 00:07:12.179 09:44:37 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:12.179 00:07:12.179 real 0m19.081s 00:07:12.179 user 0m34.538s 00:07:12.179 sys 0m5.707s 00:07:12.179 09:44:37 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:12.179 ************************************ 00:07:12.179 09:44:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:12.179 END TEST cpu_locks 00:07:12.179 ************************************ 00:07:12.179 00:07:12.179 real 0m46.541s 00:07:12.179 user 1m30.789s 00:07:12.179 sys 0m9.466s 00:07:12.179 09:44:37 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:12.179 09:44:37 event -- common/autotest_common.sh@10 -- # set +x 00:07:12.179 ************************************ 00:07:12.179 END TEST event 00:07:12.179 ************************************ 00:07:12.179 09:44:37 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:12.179 09:44:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:12.179 09:44:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:12.179 09:44:37 -- common/autotest_common.sh@10 -- # set +x 00:07:12.179 ************************************ 00:07:12.179 START TEST thread 00:07:12.179 ************************************ 00:07:12.179 09:44:37 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:12.179 * Looking for test storage... 
00:07:12.179 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:07:12.179 09:44:37 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:12.179 09:44:37 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:07:12.179 09:44:37 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:12.179 09:44:37 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:12.179 09:44:37 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:12.179 09:44:37 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:12.179 09:44:37 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:12.179 09:44:37 thread -- scripts/common.sh@336 -- # IFS=.-: 00:07:12.179 09:44:37 thread -- scripts/common.sh@336 -- # read -ra ver1 00:07:12.179 09:44:37 thread -- scripts/common.sh@337 -- # IFS=.-: 00:07:12.179 09:44:37 thread -- scripts/common.sh@337 -- # read -ra ver2 00:07:12.179 09:44:37 thread -- scripts/common.sh@338 -- # local 'op=<' 00:07:12.179 09:44:37 thread -- scripts/common.sh@340 -- # ver1_l=2 00:07:12.179 09:44:37 thread -- scripts/common.sh@341 -- # ver2_l=1 00:07:12.179 09:44:37 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:12.179 09:44:37 thread -- scripts/common.sh@344 -- # case "$op" in 00:07:12.179 09:44:37 thread -- scripts/common.sh@345 -- # : 1 00:07:12.179 09:44:37 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:12.179 09:44:37 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:12.179 09:44:37 thread -- scripts/common.sh@365 -- # decimal 1 00:07:12.179 09:44:37 thread -- scripts/common.sh@353 -- # local d=1 00:07:12.179 09:44:37 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:12.179 09:44:37 thread -- scripts/common.sh@355 -- # echo 1 00:07:12.179 09:44:37 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:07:12.179 09:44:37 thread -- scripts/common.sh@366 -- # decimal 2 00:07:12.179 09:44:37 thread -- scripts/common.sh@353 -- # local d=2 00:07:12.179 09:44:37 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:12.179 09:44:37 thread -- scripts/common.sh@355 -- # echo 2 00:07:12.179 09:44:37 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:07:12.179 09:44:37 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:12.179 09:44:37 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:12.179 09:44:37 thread -- scripts/common.sh@368 -- # return 0 00:07:12.450 09:44:37 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:12.450 09:44:37 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:12.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.450 --rc genhtml_branch_coverage=1 00:07:12.450 --rc genhtml_function_coverage=1 00:07:12.450 --rc genhtml_legend=1 00:07:12.450 --rc geninfo_all_blocks=1 00:07:12.450 --rc geninfo_unexecuted_blocks=1 00:07:12.450 00:07:12.450 ' 00:07:12.450 09:44:37 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:12.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.450 --rc genhtml_branch_coverage=1 00:07:12.450 --rc genhtml_function_coverage=1 00:07:12.450 --rc genhtml_legend=1 00:07:12.450 --rc geninfo_all_blocks=1 00:07:12.450 --rc geninfo_unexecuted_blocks=1 00:07:12.450 00:07:12.450 ' 00:07:12.450 09:44:37 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:12.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:07:12.450 --rc genhtml_branch_coverage=1 00:07:12.450 --rc genhtml_function_coverage=1 00:07:12.450 --rc genhtml_legend=1 00:07:12.450 --rc geninfo_all_blocks=1 00:07:12.450 --rc geninfo_unexecuted_blocks=1 00:07:12.450 00:07:12.450 ' 00:07:12.450 09:44:37 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:12.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.450 --rc genhtml_branch_coverage=1 00:07:12.450 --rc genhtml_function_coverage=1 00:07:12.450 --rc genhtml_legend=1 00:07:12.450 --rc geninfo_all_blocks=1 00:07:12.450 --rc geninfo_unexecuted_blocks=1 00:07:12.450 00:07:12.450 ' 00:07:12.450 09:44:37 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:12.450 09:44:37 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:12.450 09:44:37 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:12.450 09:44:37 thread -- common/autotest_common.sh@10 -- # set +x 00:07:12.450 ************************************ 00:07:12.450 START TEST thread_poller_perf 00:07:12.450 ************************************ 00:07:12.450 09:44:37 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:12.450 [2024-12-06 09:44:37.468474] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:07:12.451 [2024-12-06 09:44:37.469284] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59100 ] 00:07:12.451 [2024-12-06 09:44:37.621821] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.451 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:07:12.451 [2024-12-06 09:44:37.690722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.830 [2024-12-06T09:44:39.102Z] ====================================== 00:07:13.830 [2024-12-06T09:44:39.102Z] busy:2215274708 (cyc) 00:07:13.830 [2024-12-06T09:44:39.102Z] total_run_count: 342000 00:07:13.830 [2024-12-06T09:44:39.102Z] tsc_hz: 2200000000 (cyc) 00:07:13.830 [2024-12-06T09:44:39.102Z] ====================================== 00:07:13.830 [2024-12-06T09:44:39.102Z] poller_cost: 6477 (cyc), 2944 (nsec) 00:07:13.830 00:07:13.830 real 0m1.312s 00:07:13.830 user 0m1.156s 00:07:13.830 sys 0m0.048s 00:07:13.830 09:44:38 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:13.830 ************************************ 00:07:13.830 END TEST thread_poller_perf 00:07:13.830 ************************************ 00:07:13.830 09:44:38 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:13.830 09:44:38 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:13.830 09:44:38 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:13.830 09:44:38 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:13.830 09:44:38 thread -- common/autotest_common.sh@10 -- # set +x 00:07:13.830 ************************************ 00:07:13.830 START TEST thread_poller_perf 00:07:13.830 ************************************ 00:07:13.830 09:44:38 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:13.830 [2024-12-06 09:44:38.833648] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:07:13.830 [2024-12-06 09:44:38.833753] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59135 ] 00:07:13.830 [2024-12-06 09:44:38.978506] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.830 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:07:13.830 [2024-12-06 09:44:39.023245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.209 [2024-12-06T09:44:40.481Z] ====================================== 00:07:15.209 [2024-12-06T09:44:40.481Z] busy:2202174856 (cyc) 00:07:15.209 [2024-12-06T09:44:40.481Z] total_run_count: 4453000 00:07:15.209 [2024-12-06T09:44:40.481Z] tsc_hz: 2200000000 (cyc) 00:07:15.209 [2024-12-06T09:44:40.481Z] ====================================== 00:07:15.209 [2024-12-06T09:44:40.481Z] poller_cost: 494 (cyc), 224 (nsec) 00:07:15.209 00:07:15.209 real 0m1.254s 00:07:15.209 user 0m1.107s 00:07:15.209 sys 0m0.040s 00:07:15.209 09:44:40 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:15.209 ************************************ 00:07:15.209 END TEST thread_poller_perf 00:07:15.209 ************************************ 00:07:15.209 09:44:40 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:15.209 09:44:40 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:15.209 ************************************ 00:07:15.209 END TEST thread 00:07:15.209 ************************************ 00:07:15.209 00:07:15.209 real 0m2.846s 00:07:15.209 user 0m2.387s 00:07:15.209 sys 0m0.247s 00:07:15.209 09:44:40 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:15.209 09:44:40 thread -- common/autotest_common.sh@10 -- # set +x 00:07:15.209 09:44:40 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:15.209 09:44:40 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:15.209 09:44:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:15.209 09:44:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:15.209 09:44:40 -- common/autotest_common.sh@10 -- # set +x 00:07:15.209 ************************************ 00:07:15.209 START TEST app_cmdline 00:07:15.209 ************************************ 00:07:15.209 09:44:40 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:15.209 * Looking for test storage... 
00:07:15.209 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:15.209 09:44:40 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:15.209 09:44:40 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:07:15.209 09:44:40 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:15.209 09:44:40 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:15.209 09:44:40 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:15.209 09:44:40 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:15.209 09:44:40 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:15.209 09:44:40 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:15.209 09:44:40 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:15.209 09:44:40 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:15.209 09:44:40 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:07:15.209 09:44:40 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:07:15.209 09:44:40 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:15.209 09:44:40 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:15.209 09:44:40 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:15.209 09:44:40 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:15.209 09:44:40 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:15.209 09:44:40 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:15.209 09:44:40 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:15.209 09:44:40 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:15.209 09:44:40 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:15.209 09:44:40 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:15.209 09:44:40 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:15.209 09:44:40 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:15.209 09:44:40 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:15.209 09:44:40 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:15.209 09:44:40 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:15.209 09:44:40 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:15.209 09:44:40 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:15.209 09:44:40 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:15.209 09:44:40 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:15.209 09:44:40 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:15.209 09:44:40 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:15.209 09:44:40 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:15.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.209 --rc genhtml_branch_coverage=1 00:07:15.209 --rc genhtml_function_coverage=1 00:07:15.209 --rc genhtml_legend=1 00:07:15.209 --rc geninfo_all_blocks=1 00:07:15.209 --rc geninfo_unexecuted_blocks=1 00:07:15.209 00:07:15.209 ' 00:07:15.209 09:44:40 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:15.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.209 --rc genhtml_branch_coverage=1 00:07:15.209 --rc genhtml_function_coverage=1 00:07:15.209 --rc genhtml_legend=1 00:07:15.209 --rc geninfo_all_blocks=1 00:07:15.209 --rc geninfo_unexecuted_blocks=1 00:07:15.209 
00:07:15.209 ' 00:07:15.209 09:44:40 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:15.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.209 --rc genhtml_branch_coverage=1 00:07:15.209 --rc genhtml_function_coverage=1 00:07:15.209 --rc genhtml_legend=1 00:07:15.209 --rc geninfo_all_blocks=1 00:07:15.209 --rc geninfo_unexecuted_blocks=1 00:07:15.209 00:07:15.209 ' 00:07:15.209 09:44:40 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:15.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.210 --rc genhtml_branch_coverage=1 00:07:15.210 --rc genhtml_function_coverage=1 00:07:15.210 --rc genhtml_legend=1 00:07:15.210 --rc geninfo_all_blocks=1 00:07:15.210 --rc geninfo_unexecuted_blocks=1 00:07:15.210 00:07:15.210 ' 00:07:15.210 09:44:40 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:15.210 09:44:40 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:15.210 09:44:40 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59218 00:07:15.210 09:44:40 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59218 00:07:15.210 09:44:40 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59218 ']' 00:07:15.210 09:44:40 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:15.210 09:44:40 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:15.210 09:44:40 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:15.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:15.210 09:44:40 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:15.210 09:44:40 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:15.210 [2024-12-06 09:44:40.425406] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:07:15.210 [2024-12-06 09:44:40.425749] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59218 ] 00:07:15.469 [2024-12-06 09:44:40.565879] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.469 [2024-12-06 09:44:40.617246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.469 [2024-12-06 09:44:40.684042] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:16.407 09:44:41 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:16.407 09:44:41 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:07:16.407 09:44:41 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:16.407 { 00:07:16.407 "version": "SPDK v25.01-pre git sha1 eec618948", 00:07:16.407 "fields": { 00:07:16.407 "major": 25, 00:07:16.407 "minor": 1, 00:07:16.407 "patch": 0, 00:07:16.407 "suffix": "-pre", 00:07:16.407 "commit": "eec618948" 00:07:16.407 } 00:07:16.407 } 00:07:16.407 09:44:41 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:16.407 09:44:41 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:16.407 09:44:41 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:16.407 09:44:41 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:16.407 09:44:41 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:16.407 09:44:41 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:16.407 09:44:41 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:16.407 09:44:41 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.407 09:44:41 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:16.407 09:44:41 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.407 09:44:41 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:16.408 09:44:41 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:16.408 09:44:41 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:16.408 09:44:41 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:07:16.408 09:44:41 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:16.408 09:44:41 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:16.408 09:44:41 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:16.408 09:44:41 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:16.408 09:44:41 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:16.408 09:44:41 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:16.408 09:44:41 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:16.408 09:44:41 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:16.408 09:44:41 app_cmdline -- common/autotest_common.sh@646 -- # 
[[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:16.408 09:44:41 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:16.977 request: 00:07:16.977 { 00:07:16.977 "method": "env_dpdk_get_mem_stats", 00:07:16.977 "req_id": 1 00:07:16.977 } 00:07:16.977 Got JSON-RPC error response 00:07:16.977 response: 00:07:16.977 { 00:07:16.977 "code": -32601, 00:07:16.977 "message": "Method not found" 00:07:16.977 } 00:07:16.977 09:44:42 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:07:16.977 09:44:42 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:16.977 09:44:42 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:16.977 09:44:42 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:16.977 09:44:42 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59218 00:07:16.977 09:44:42 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59218 ']' 00:07:16.977 09:44:42 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59218 00:07:16.977 09:44:42 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:07:16.977 09:44:42 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:16.977 09:44:42 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59218 00:07:16.977 killing process with pid 59218 00:07:16.977 09:44:42 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:16.977 09:44:42 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:16.977 09:44:42 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59218' 00:07:16.977 09:44:42 app_cmdline -- common/autotest_common.sh@973 -- # kill 59218 00:07:16.977 09:44:42 app_cmdline -- common/autotest_common.sh@978 -- # wait 59218 00:07:17.236 00:07:17.236 real 0m2.266s 00:07:17.236 user 0m2.781s 00:07:17.236 sys 0m0.538s 00:07:17.236 ************************************ 00:07:17.236 END TEST app_cmdline 00:07:17.236 ************************************ 00:07:17.236 09:44:42 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:17.236 09:44:42 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:17.236 09:44:42 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:17.236 09:44:42 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:17.236 09:44:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:17.236 09:44:42 -- common/autotest_common.sh@10 -- # set +x 00:07:17.236 ************************************ 00:07:17.236 START TEST version 00:07:17.236 ************************************ 00:07:17.236 09:44:42 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:17.495 * Looking for test storage... 
00:07:17.495 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:17.495 09:44:42 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:17.495 09:44:42 version -- common/autotest_common.sh@1711 -- # lcov --version 00:07:17.495 09:44:42 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:17.495 09:44:42 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:17.495 09:44:42 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:17.495 09:44:42 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:17.495 09:44:42 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:17.495 09:44:42 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:17.495 09:44:42 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:17.495 09:44:42 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:17.496 09:44:42 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:17.496 09:44:42 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:17.496 09:44:42 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:17.496 09:44:42 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:17.496 09:44:42 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:17.496 09:44:42 version -- scripts/common.sh@344 -- # case "$op" in 00:07:17.496 09:44:42 version -- scripts/common.sh@345 -- # : 1 00:07:17.496 09:44:42 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:17.496 09:44:42 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:17.496 09:44:42 version -- scripts/common.sh@365 -- # decimal 1 00:07:17.496 09:44:42 version -- scripts/common.sh@353 -- # local d=1 00:07:17.496 09:44:42 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:17.496 09:44:42 version -- scripts/common.sh@355 -- # echo 1 00:07:17.496 09:44:42 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:17.496 09:44:42 version -- scripts/common.sh@366 -- # decimal 2 00:07:17.496 09:44:42 version -- scripts/common.sh@353 -- # local d=2 00:07:17.496 09:44:42 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:17.496 09:44:42 version -- scripts/common.sh@355 -- # echo 2 00:07:17.496 09:44:42 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:17.496 09:44:42 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:17.496 09:44:42 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:17.496 09:44:42 version -- scripts/common.sh@368 -- # return 0 00:07:17.496 09:44:42 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:17.496 09:44:42 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:17.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.496 --rc genhtml_branch_coverage=1 00:07:17.496 --rc genhtml_function_coverage=1 00:07:17.496 --rc genhtml_legend=1 00:07:17.496 --rc geninfo_all_blocks=1 00:07:17.496 --rc geninfo_unexecuted_blocks=1 00:07:17.496 00:07:17.496 ' 00:07:17.496 09:44:42 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:17.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.496 --rc genhtml_branch_coverage=1 00:07:17.496 --rc genhtml_function_coverage=1 00:07:17.496 --rc genhtml_legend=1 00:07:17.496 --rc geninfo_all_blocks=1 00:07:17.496 --rc geninfo_unexecuted_blocks=1 00:07:17.496 00:07:17.496 ' 00:07:17.496 09:44:42 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:17.496 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:07:17.496 --rc genhtml_branch_coverage=1 00:07:17.496 --rc genhtml_function_coverage=1 00:07:17.496 --rc genhtml_legend=1 00:07:17.496 --rc geninfo_all_blocks=1 00:07:17.496 --rc geninfo_unexecuted_blocks=1 00:07:17.496 00:07:17.496 ' 00:07:17.496 09:44:42 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:17.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.496 --rc genhtml_branch_coverage=1 00:07:17.496 --rc genhtml_function_coverage=1 00:07:17.496 --rc genhtml_legend=1 00:07:17.496 --rc geninfo_all_blocks=1 00:07:17.496 --rc geninfo_unexecuted_blocks=1 00:07:17.496 00:07:17.496 ' 00:07:17.496 09:44:42 version -- app/version.sh@17 -- # get_header_version major 00:07:17.496 09:44:42 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:17.496 09:44:42 version -- app/version.sh@14 -- # cut -f2 00:07:17.496 09:44:42 version -- app/version.sh@14 -- # tr -d '"' 00:07:17.496 09:44:42 version -- app/version.sh@17 -- # major=25 00:07:17.496 09:44:42 version -- app/version.sh@18 -- # get_header_version minor 00:07:17.496 09:44:42 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:17.496 09:44:42 version -- app/version.sh@14 -- # cut -f2 00:07:17.496 09:44:42 version -- app/version.sh@14 -- # tr -d '"' 00:07:17.496 09:44:42 version -- app/version.sh@18 -- # minor=1 00:07:17.496 09:44:42 version -- app/version.sh@19 -- # get_header_version patch 00:07:17.496 09:44:42 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:17.496 09:44:42 version -- app/version.sh@14 -- # cut -f2 00:07:17.496 09:44:42 version -- app/version.sh@14 -- # tr -d '"' 00:07:17.496 09:44:42 version -- app/version.sh@19 -- # patch=0 00:07:17.496 09:44:42 version -- app/version.sh@20 -- # get_header_version suffix 00:07:17.496 09:44:42 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:17.496 09:44:42 version -- app/version.sh@14 -- # cut -f2 00:07:17.496 09:44:42 version -- app/version.sh@14 -- # tr -d '"' 00:07:17.496 09:44:42 version -- app/version.sh@20 -- # suffix=-pre 00:07:17.496 09:44:42 version -- app/version.sh@22 -- # version=25.1 00:07:17.496 09:44:42 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:17.496 09:44:42 version -- app/version.sh@28 -- # version=25.1rc0 00:07:17.496 09:44:42 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:17.496 09:44:42 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:17.496 09:44:42 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:17.496 09:44:42 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:17.496 00:07:17.496 real 0m0.262s 00:07:17.496 user 0m0.174s 00:07:17.496 sys 0m0.128s 00:07:17.496 ************************************ 00:07:17.496 END TEST version 00:07:17.496 ************************************ 00:07:17.496 09:44:42 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:17.496 09:44:42 version -- common/autotest_common.sh@10 -- # set +x 00:07:17.756 09:44:42 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:17.756 09:44:42 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:07:17.756 09:44:42 -- spdk/autotest.sh@194 -- # uname -s 00:07:17.756 09:44:42 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:07:17.756 09:44:42 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:17.756 09:44:42 -- spdk/autotest.sh@195 -- # [[ 1 -eq 1 ]] 00:07:17.756 09:44:42 -- spdk/autotest.sh@201 -- # [[ 0 -eq 0 ]] 00:07:17.756 09:44:42 -- spdk/autotest.sh@202 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:07:17.756 09:44:42 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:17.756 09:44:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:17.756 09:44:42 -- common/autotest_common.sh@10 -- # set +x 00:07:17.756 ************************************ 00:07:17.756 START TEST spdk_dd 00:07:17.756 ************************************ 00:07:17.756 09:44:42 spdk_dd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:07:17.756 * Looking for test storage... 00:07:17.756 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:17.756 09:44:42 spdk_dd -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:17.756 09:44:42 spdk_dd -- common/autotest_common.sh@1711 -- # lcov --version 00:07:17.756 09:44:42 spdk_dd -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:17.756 09:44:43 spdk_dd -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:17.756 09:44:43 spdk_dd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:17.756 09:44:43 spdk_dd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:17.756 09:44:43 spdk_dd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:17.756 09:44:43 spdk_dd -- scripts/common.sh@336 -- # IFS=.-: 00:07:17.756 09:44:43 spdk_dd -- scripts/common.sh@336 -- # read -ra ver1 00:07:17.756 09:44:43 spdk_dd -- scripts/common.sh@337 -- # IFS=.-: 00:07:17.756 09:44:43 spdk_dd -- scripts/common.sh@337 -- # read -ra ver2 00:07:17.756 09:44:43 spdk_dd -- scripts/common.sh@338 -- # local 'op=<' 00:07:17.756 09:44:43 spdk_dd -- scripts/common.sh@340 -- # ver1_l=2 00:07:17.756 09:44:43 spdk_dd -- scripts/common.sh@341 -- # ver2_l=1 00:07:17.756 09:44:43 spdk_dd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:17.756 09:44:43 spdk_dd -- scripts/common.sh@344 -- # case "$op" in 00:07:17.756 09:44:43 spdk_dd -- scripts/common.sh@345 -- # : 1 00:07:17.756 09:44:43 spdk_dd -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:17.756 09:44:43 spdk_dd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:17.756 09:44:43 spdk_dd -- scripts/common.sh@365 -- # decimal 1 00:07:17.756 09:44:43 spdk_dd -- scripts/common.sh@353 -- # local d=1 00:07:17.756 09:44:43 spdk_dd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:17.756 09:44:43 spdk_dd -- scripts/common.sh@355 -- # echo 1 00:07:17.756 09:44:43 spdk_dd -- scripts/common.sh@365 -- # ver1[v]=1 00:07:18.016 09:44:43 spdk_dd -- scripts/common.sh@366 -- # decimal 2 00:07:18.016 09:44:43 spdk_dd -- scripts/common.sh@353 -- # local d=2 00:07:18.016 09:44:43 spdk_dd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:18.016 09:44:43 spdk_dd -- scripts/common.sh@355 -- # echo 2 00:07:18.016 09:44:43 spdk_dd -- scripts/common.sh@366 -- # ver2[v]=2 00:07:18.016 09:44:43 spdk_dd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:18.016 09:44:43 spdk_dd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:18.016 09:44:43 spdk_dd -- scripts/common.sh@368 -- # return 0 00:07:18.016 09:44:43 spdk_dd -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:18.016 09:44:43 spdk_dd -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:18.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.016 --rc genhtml_branch_coverage=1 00:07:18.016 --rc genhtml_function_coverage=1 00:07:18.016 --rc genhtml_legend=1 00:07:18.016 --rc geninfo_all_blocks=1 00:07:18.016 --rc geninfo_unexecuted_blocks=1 00:07:18.016 00:07:18.016 ' 00:07:18.016 09:44:43 spdk_dd -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:18.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.016 --rc genhtml_branch_coverage=1 00:07:18.016 --rc genhtml_function_coverage=1 00:07:18.016 --rc genhtml_legend=1 00:07:18.016 --rc geninfo_all_blocks=1 00:07:18.016 --rc geninfo_unexecuted_blocks=1 00:07:18.016 00:07:18.016 ' 00:07:18.016 09:44:43 spdk_dd -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:18.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.016 --rc genhtml_branch_coverage=1 00:07:18.016 --rc genhtml_function_coverage=1 00:07:18.016 --rc genhtml_legend=1 00:07:18.016 --rc geninfo_all_blocks=1 00:07:18.016 --rc geninfo_unexecuted_blocks=1 00:07:18.016 00:07:18.016 ' 00:07:18.016 09:44:43 spdk_dd -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:18.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.016 --rc genhtml_branch_coverage=1 00:07:18.016 --rc genhtml_function_coverage=1 00:07:18.016 --rc genhtml_legend=1 00:07:18.016 --rc geninfo_all_blocks=1 00:07:18.016 --rc geninfo_unexecuted_blocks=1 00:07:18.016 00:07:18.016 ' 00:07:18.016 09:44:43 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:18.016 09:44:43 spdk_dd -- scripts/common.sh@15 -- # shopt -s extglob 00:07:18.016 09:44:43 spdk_dd -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:18.016 09:44:43 spdk_dd -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:18.016 09:44:43 spdk_dd -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:18.016 09:44:43 spdk_dd -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.016 09:44:43 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.016 09:44:43 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.016 09:44:43 spdk_dd -- paths/export.sh@5 -- # export PATH 00:07:18.016 09:44:43 spdk_dd -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.016 09:44:43 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:18.277 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:18.277 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:18.277 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:18.277 09:44:43 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:07:18.277 09:44:43 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:07:18.277 09:44:43 spdk_dd -- scripts/common.sh@312 -- # local bdf bdfs 00:07:18.277 09:44:43 spdk_dd -- scripts/common.sh@313 -- # local nvmes 00:07:18.277 09:44:43 spdk_dd -- scripts/common.sh@315 -- # [[ -n '' ]] 00:07:18.277 09:44:43 spdk_dd -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:07:18.277 09:44:43 spdk_dd -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:07:18.277 09:44:43 spdk_dd -- scripts/common.sh@298 -- # local bdf= 00:07:18.277 09:44:43 spdk_dd -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:07:18.277 09:44:43 spdk_dd -- scripts/common.sh@233 -- # local class 00:07:18.277 09:44:43 spdk_dd -- scripts/common.sh@234 -- # local subclass 00:07:18.277 09:44:43 spdk_dd -- scripts/common.sh@235 -- # local progif 00:07:18.277 09:44:43 spdk_dd -- scripts/common.sh@236 -- # printf %02x 1 00:07:18.277 09:44:43 spdk_dd -- scripts/common.sh@236 -- # class=01 00:07:18.277 09:44:43 spdk_dd -- scripts/common.sh@237 -- # printf %02x 8 00:07:18.277 09:44:43 spdk_dd -- scripts/common.sh@237 -- # subclass=08 00:07:18.277 09:44:43 spdk_dd -- scripts/common.sh@238 -- # printf %02x 2 00:07:18.277 09:44:43 spdk_dd -- 
scripts/common.sh@238 -- # progif=02 00:07:18.277 09:44:43 spdk_dd -- scripts/common.sh@240 -- # hash lspci 00:07:18.277 09:44:43 spdk_dd -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:07:18.277 09:44:43 spdk_dd -- scripts/common.sh@242 -- # lspci -mm -n -D 00:07:18.277 09:44:43 spdk_dd -- scripts/common.sh@243 -- # grep -i -- -p02 00:07:18.277 09:44:43 spdk_dd -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:07:18.277 09:44:43 spdk_dd -- scripts/common.sh@245 -- # tr -d '"' 00:07:18.277 09:44:43 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:07:18.277 09:44:43 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:07:18.277 09:44:43 spdk_dd -- scripts/common.sh@18 -- # local i 00:07:18.277 09:44:43 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:07:18.277 09:44:43 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:07:18.277 09:44:43 spdk_dd -- scripts/common.sh@27 -- # return 0 00:07:18.277 09:44:43 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:07:18.277 09:44:43 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:07:18.277 09:44:43 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:07:18.277 09:44:43 spdk_dd -- scripts/common.sh@18 -- # local i 00:07:18.277 09:44:43 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:07:18.277 09:44:43 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:07:18.277 09:44:43 spdk_dd -- scripts/common.sh@27 -- # return 0 00:07:18.277 09:44:43 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:07:18.277 09:44:43 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:07:18.277 09:44:43 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:07:18.277 09:44:43 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:07:18.277 09:44:43 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:07:18.277 09:44:43 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:07:18.277 09:44:43 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:07:18.277 09:44:43 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:07:18.277 09:44:43 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:07:18.277 09:44:43 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:07:18.277 09:44:43 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:07:18.277 09:44:43 spdk_dd -- scripts/common.sh@328 -- # (( 2 )) 00:07:18.277 09:44:43 spdk_dd -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:07:18.277 09:44:43 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:07:18.277 09:44:43 spdk_dd -- dd/common.sh@139 -- # local lib 00:07:18.277 09:44:43 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:07:18.277 09:44:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:18.277 09:44:43 spdk_dd -- dd/common.sh@137 -- # objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:18.277 09:44:43 spdk_dd -- dd/common.sh@137 -- # grep NEEDED 00:07:18.277 09:44:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:07:18.277 09:44:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:18.277 09:44:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:07:18.277 09:44:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:18.277 09:44:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.1 == liburing.so.* ]] 
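The trace above shows scripts/common.sh enumerating NVMe controllers by PCI class code: class 01 (mass storage), subclass 08 (non-volatile memory), prog-if 02 (NVMe), which is why both 0000:00:10.0 and 0000:00:11.0 are picked up. A minimal standalone sketch of the same idea, assuming lspci is available (this is not the literal SPDK helper):

    # List NVMe controller BDFs by filtering lspci on class/subclass/prog-if 01/08/02.
    list_nvme_bdfs() {
        # -D full domain addresses, -n numeric IDs, -mm machine-readable fields
        lspci -Dnmm |
            grep -- '-p02' |                      # keep prog-if 02 (NVMe I/O command set)
            awk '$2 == "\"0108\"" { print $1 }'   # class 01, subclass 08 -> print the BDF
    }

    list_nvme_bdfs    # on this VM: 0000:00:10.0 and 0000:00:11.0

Devices already claimed by a mount or by another driver are then screened out by pci_can_use, which is what the pci_can_use / "return 0" lines in the trace are doing for each BDF.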
00:07:18.277 09:44:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:18.277 09:44:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:07:18.277 09:44:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:18.277 09:44:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:07:18.277 09:44:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:18.277 09:44:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:07:18.277 09:44:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:18.277 09:44:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:07:18.277 09:44:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:18.277 09:44:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:07:18.277 09:44:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:18.277 09:44:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:07:18.277 09:44:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:18.277 09:44:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 00:07:18.277 09:44:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:18.277 09:44:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:07:18.277 09:44:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:18.277 09:44:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:07:18.277 09:44:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:18.277 09:44:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.11.0 == liburing.so.* ]] 00:07:18.277 09:44:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:18.277 09:44:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.12.0 == liburing.so.* ]] 00:07:18.277 09:44:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:18.277 09:44:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.11.0 == liburing.so.* ]] 00:07:18.277 09:44:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:18.277 09:44:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.12.0 == liburing.so.* ]] 00:07:18.277 09:44:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:18.277 09:44:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.15.0 == liburing.so.* ]] 00:07:18.277 09:44:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:18.277 09:44:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.7.0 == liburing.so.* ]] 00:07:18.277 09:44:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:18.277 09:44:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 00:07:18.277 09:44:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:18.277 09:44:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:07:18.277 09:44:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:18.277 09:44:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:07:18.277 09:44:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:18.277 09:44:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:07:18.277 09:44:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:18.277 09:44:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:07:18.277 09:44:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 
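This long run of "[[ lib == liburing.so.* ]]" checks is dd/common.sh's check_liburing walking every NEEDED entry in the spdk_dd binary's dynamic section; the rest of the dd test plan depends on whether liburing shows up there. A condensed sketch of the detection, assuming the same objdump-based approach (not the literal dd/common.sh code):

    # Flag whether the spdk_dd binary is dynamically linked against liburing.
    dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    liburing_in_use=0
    while read -r _ lib _; do                    # objdump prints: "  NEEDED  <soname>"
        if [[ $lib == liburing.so.* ]]; then
            liburing_in_use=1
            break
        fi
    done < <(objdump -p "$dd_bin" | grep NEEDED)
    echo "liburing_in_use=$liburing_in_use"

The match on liburing.so.2 (visible further down in the trace) is what triggers the "* spdk_dd linked to liburing" message and the export of liburing_in_use=1.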
00:07:18.277 09:44:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.1 == liburing.so.* ]] 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.15.1 == liburing.so.* ]] 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.2.0 == liburing.so.* ]] 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev_aio.so.1.0 == liburing.so.* ]] 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev.so.2.0 == liburing.so.* ]] 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:18.278 09:44:43 spdk_dd -- 
dd/common.sh@143 -- # [[ libspdk_event.so.14.0 == liburing.so.* ]] 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.17.0 == liburing.so.* ]] 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.16.0 == liburing.so.* ]] 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.5.0 == liburing.so.* ]] 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]] 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.6.0 == liburing.so.* ]] 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.11.0 == liburing.so.* ]] 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.11.0 == liburing.so.* ]] 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.2.0 == liburing.so.* ]] 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.10.1 == liburing.so.* ]] 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@142 -- 
# read -r _ lib _ 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.1 == liburing.so.* ]] 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:07:18.278 * spdk_dd linked to liburing 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@146 -- # [[ -e 
/home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:07:18.278 09:44:43 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:07:18.278 09:44:43 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:18.278 09:44:43 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:18.278 09:44:43 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:18.278 09:44:43 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:18.278 09:44:43 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:07:18.278 09:44:43 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:18.278 09:44:43 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:18.278 09:44:43 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:18.278 09:44:43 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:18.278 09:44:43 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:18.278 09:44:43 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:18.278 09:44:43 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:18.278 09:44:43 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:18.278 09:44:43 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:18.278 09:44:43 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:18.278 09:44:43 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:18.279 09:44:43 spdk_dd -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:07:18.279 09:44:43 spdk_dd -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:07:18.279 09:44:43 spdk_dd -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:18.279 09:44:43 spdk_dd -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:07:18.279 09:44:43 spdk_dd -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:07:18.279 09:44:43 spdk_dd -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:07:18.279 09:44:43 spdk_dd -- common/build_config.sh@23 -- # CONFIG_CET=n 00:07:18.279 09:44:43 spdk_dd -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:18.279 09:44:43 spdk_dd -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:07:18.279 09:44:43 spdk_dd -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:07:18.279 09:44:43 spdk_dd -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:07:18.279 09:44:43 spdk_dd -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:18.279 09:44:43 spdk_dd -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:18.279 09:44:43 spdk_dd -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:07:18.279 09:44:43 spdk_dd -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:07:18.279 09:44:43 spdk_dd -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:07:18.279 09:44:43 spdk_dd -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:07:18.279 09:44:43 spdk_dd -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:07:18.279 09:44:43 spdk_dd -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:07:18.279 09:44:43 spdk_dd -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:07:18.279 09:44:43 spdk_dd -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:07:18.279 09:44:43 spdk_dd -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:07:18.279 09:44:43 spdk_dd -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:07:18.279 09:44:43 spdk_dd -- common/build_config.sh@40 -- # 
CONFIG_CRYPTO=n 00:07:18.279 09:44:43 spdk_dd -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:07:18.279 09:44:43 spdk_dd -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:07:18.279 09:44:43 spdk_dd -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:07:18.279 09:44:43 spdk_dd -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:07:18.279 09:44:43 spdk_dd -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:07:18.279 09:44:43 spdk_dd -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:07:18.279 09:44:43 spdk_dd -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:18.279 09:44:43 spdk_dd -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:07:18.279 09:44:43 spdk_dd -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:07:18.279 09:44:43 spdk_dd -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:07:18.279 09:44:43 spdk_dd -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:07:18.279 09:44:43 spdk_dd -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:07:18.279 09:44:43 spdk_dd -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:07:18.279 09:44:43 spdk_dd -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:18.279 09:44:43 spdk_dd -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:07:18.279 09:44:43 spdk_dd -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:07:18.279 09:44:43 spdk_dd -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:07:18.279 09:44:43 spdk_dd -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:07:18.279 09:44:43 spdk_dd -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:07:18.279 09:44:43 spdk_dd -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=y 00:07:18.279 09:44:43 spdk_dd -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:07:18.279 09:44:43 spdk_dd -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:07:18.279 09:44:43 spdk_dd -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:07:18.279 09:44:43 spdk_dd -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:07:18.279 09:44:43 spdk_dd -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:07:18.279 09:44:43 spdk_dd -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:07:18.279 09:44:43 spdk_dd -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:07:18.279 09:44:43 spdk_dd -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:07:18.279 09:44:43 spdk_dd -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:07:18.279 09:44:43 spdk_dd -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:07:18.279 09:44:43 spdk_dd -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:07:18.279 09:44:43 spdk_dd -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:07:18.279 09:44:43 spdk_dd -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:07:18.279 09:44:43 spdk_dd -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:07:18.279 09:44:43 spdk_dd -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:18.279 09:44:43 spdk_dd -- common/build_config.sh@76 -- # CONFIG_FC=n 00:07:18.279 09:44:43 spdk_dd -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:07:18.279 09:44:43 spdk_dd -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:07:18.279 09:44:43 spdk_dd -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:07:18.279 09:44:43 spdk_dd -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:07:18.279 09:44:43 spdk_dd -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:07:18.279 09:44:43 spdk_dd -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:07:18.279 09:44:43 spdk_dd 
-- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:07:18.279 09:44:43 spdk_dd -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:07:18.279 09:44:43 spdk_dd -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:07:18.279 09:44:43 spdk_dd -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:07:18.279 09:44:43 spdk_dd -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:18.279 09:44:43 spdk_dd -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:07:18.279 09:44:43 spdk_dd -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:07:18.279 09:44:43 spdk_dd -- common/build_config.sh@90 -- # CONFIG_URING=y 00:07:18.279 09:44:43 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:07:18.279 09:44:43 spdk_dd -- dd/common.sh@152 -- # export liburing_in_use=1 00:07:18.279 09:44:43 spdk_dd -- dd/common.sh@152 -- # liburing_in_use=1 00:07:18.279 09:44:43 spdk_dd -- dd/common.sh@153 -- # return 0 00:07:18.279 09:44:43 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:07:18.279 09:44:43 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:07:18.279 09:44:43 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:18.279 09:44:43 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:18.279 09:44:43 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:18.279 ************************************ 00:07:18.279 START TEST spdk_dd_basic_rw 00:07:18.279 ************************************ 00:07:18.279 09:44:43 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:07:18.539 * Looking for test storage... 00:07:18.539 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:18.539 09:44:43 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:18.539 09:44:43 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:18.539 09:44:43 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1711 -- # lcov --version 00:07:18.539 09:44:43 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:18.539 09:44:43 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:18.539 09:44:43 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:18.539 09:44:43 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:18.539 09:44:43 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # IFS=.-: 00:07:18.539 09:44:43 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # read -ra ver1 00:07:18.539 09:44:43 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # IFS=.-: 00:07:18.539 09:44:43 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # read -ra ver2 00:07:18.539 09:44:43 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@338 -- # local 'op=<' 00:07:18.539 09:44:43 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@340 -- # ver1_l=2 00:07:18.539 09:44:43 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@341 -- # ver2_l=1 00:07:18.539 09:44:43 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:18.539 09:44:43 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@344 -- # case "$op" in 00:07:18.539 09:44:43 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@345 -- # : 1 00:07:18.539 09:44:43 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:18.539 09:44:43 spdk_dd.spdk_dd_basic_rw -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:18.539 09:44:43 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # decimal 1 00:07:18.539 09:44:43 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=1 00:07:18.539 09:44:43 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:18.539 09:44:43 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 1 00:07:18.539 09:44:43 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # ver1[v]=1 00:07:18.539 09:44:43 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # decimal 2 00:07:18.539 09:44:43 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=2 00:07:18.539 09:44:43 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:18.539 09:44:43 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 2 00:07:18.539 09:44:43 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # ver2[v]=2 00:07:18.539 09:44:43 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:18.539 09:44:43 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:18.539 09:44:43 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # return 0 00:07:18.539 09:44:43 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:18.539 09:44:43 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:18.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.539 --rc genhtml_branch_coverage=1 00:07:18.539 --rc genhtml_function_coverage=1 00:07:18.539 --rc genhtml_legend=1 00:07:18.539 --rc geninfo_all_blocks=1 00:07:18.539 --rc geninfo_unexecuted_blocks=1 00:07:18.539 00:07:18.539 ' 00:07:18.539 09:44:43 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:18.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.539 --rc genhtml_branch_coverage=1 00:07:18.539 --rc genhtml_function_coverage=1 00:07:18.539 --rc genhtml_legend=1 00:07:18.539 --rc geninfo_all_blocks=1 00:07:18.539 --rc geninfo_unexecuted_blocks=1 00:07:18.539 00:07:18.539 ' 00:07:18.539 09:44:43 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:18.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.539 --rc genhtml_branch_coverage=1 00:07:18.539 --rc genhtml_function_coverage=1 00:07:18.539 --rc genhtml_legend=1 00:07:18.539 --rc geninfo_all_blocks=1 00:07:18.539 --rc geninfo_unexecuted_blocks=1 00:07:18.539 00:07:18.539 ' 00:07:18.539 09:44:43 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:18.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.539 --rc genhtml_branch_coverage=1 00:07:18.539 --rc genhtml_function_coverage=1 00:07:18.539 --rc genhtml_legend=1 00:07:18.539 --rc geninfo_all_blocks=1 00:07:18.539 --rc geninfo_unexecuted_blocks=1 00:07:18.539 00:07:18.539 ' 00:07:18.539 09:44:43 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:18.539 09:44:43 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@15 -- # shopt -s extglob 00:07:18.539 09:44:43 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:18.539 09:44:43 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:18.539 09:44:43 spdk_dd.spdk_dd_basic_rw -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:18.539 09:44:43 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.539 09:44:43 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.539 09:44:43 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.539 09:44:43 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:07:18.539 09:44:43 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.539 09:44:43 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:07:18.539 09:44:43 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:07:18.539 09:44:43 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:07:18.539 09:44:43 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:07:18.539 09:44:43 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:07:18.539 09:44:43 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:07:18.539 09:44:43 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:07:18.539 09:44:43 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:18.539 09:44:43 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 
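basic_rw.sh next asks dd/common.sh for the controller's native block size: get_native_nvme_bs runs spdk_nvme_identify against 0000:00:10.0, extracts the current LBA format number from the output, then reads that format's data size. A hedged sketch of that lookup (function body assumed; the real helper lives in dd/common.sh):

    # Derive the native block size of an NVMe controller from spdk_nvme_identify output.
    get_native_nvme_bs() {
        local pci=$1 id lbaf re
        id=$(/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r "trtype:pcie traddr:$pci")

        re='Current LBA Format: *LBA Format #([0-9]+)'       # e.g. "... LBA Format #04"
        [[ $id =~ $re ]] || return 1
        lbaf=${BASH_REMATCH[1]}

        re="LBA Format #${lbaf}: Data Size: *([0-9]+)"       # e.g. "#04: Data Size: 4096 ..."
        [[ $id =~ $re ]] || return 1
        echo "${BASH_REMATCH[1]}"
    }

    get_native_nvme_bs 0000:00:10.0    # prints 4096 for the QEMU controller dumped below

That 4096 becomes native_bs, and the first sub-test (dd_bs_lt_native_bs) deliberately runs spdk_dd with --bs=2048 under NOT, expecting the "--bs value cannot be less than ... native block size" error that appears further down in the log.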
00:07:18.539 09:44:43 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:07:18.539 09:44:43 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:07:18.539 09:44:43 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:07:18.539 09:44:43 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:07:18.801 09:44:43 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update 
Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 
Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:07:18.801 09:44:43 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:07:18.802 09:44:43 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration 
Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported 
SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format 
#02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:07:18.802 09:44:43 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:07:18.802 09:44:43 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:07:18.802 09:44:43 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:07:18.802 09:44:43 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:07:18.802 09:44:43 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:18.802 09:44:43 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:07:18.802 09:44:43 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:18.802 09:44:43 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:18.802 09:44:43 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:18.802 09:44:43 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:18.802 09:44:43 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:18.802 ************************************ 00:07:18.802 START TEST dd_bs_lt_native_bs 00:07:18.802 ************************************ 00:07:18.802 09:44:43 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1129 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:18.802 09:44:43 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@652 -- # local es=0 00:07:18.802 09:44:43 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:18.802 09:44:43 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:18.802 09:44:43 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:18.802 09:44:43 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # type -t 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:18.802 09:44:43 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:18.802 09:44:43 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:18.802 09:44:43 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:18.802 09:44:43 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:18.802 09:44:43 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:18.802 09:44:43 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:18.802 { 00:07:18.802 "subsystems": [ 00:07:18.802 { 00:07:18.802 "subsystem": "bdev", 00:07:18.802 "config": [ 00:07:18.802 { 00:07:18.802 "params": { 00:07:18.802 "trtype": "pcie", 00:07:18.802 "traddr": "0000:00:10.0", 00:07:18.802 "name": "Nvme0" 00:07:18.802 }, 00:07:18.802 "method": "bdev_nvme_attach_controller" 00:07:18.802 }, 00:07:18.802 { 00:07:18.802 "method": "bdev_wait_for_examine" 00:07:18.802 } 00:07:18.802 ] 00:07:18.802 } 00:07:18.802 ] 00:07:18.802 } 00:07:18.802 [2024-12-06 09:44:43.993689] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:07:18.802 [2024-12-06 09:44:43.993922] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59568 ] 00:07:19.067 [2024-12-06 09:44:44.142900] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.067 [2024-12-06 09:44:44.196826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.067 [2024-12-06 09:44:44.259435] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:19.329 [2024-12-06 09:44:44.373029] spdk_dd.c:1159:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:07:19.329 [2024-12-06 09:44:44.373107] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:19.329 [2024-12-06 09:44:44.504361] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:07:19.329 09:44:44 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@655 -- # es=234 00:07:19.329 09:44:44 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:19.329 09:44:44 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@664 -- # es=106 00:07:19.329 09:44:44 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@665 -- # case "$es" in 00:07:19.329 09:44:44 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@672 -- # es=1 00:07:19.329 09:44:44 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:19.329 00:07:19.329 real 0m0.640s 00:07:19.329 user 0m0.422s 00:07:19.329 sys 0m0.164s 00:07:19.329 ************************************ 00:07:19.329 END TEST dd_bs_lt_native_bs 00:07:19.329 ************************************ 00:07:19.329 
09:44:44 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:19.329 09:44:44 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:07:19.587 09:44:44 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:07:19.587 09:44:44 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:19.587 09:44:44 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:19.587 09:44:44 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:19.587 ************************************ 00:07:19.587 START TEST dd_rw 00:07:19.587 ************************************ 00:07:19.587 09:44:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1129 -- # basic_rw 4096 00:07:19.587 09:44:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:07:19.587 09:44:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:07:19.587 09:44:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:07:19.587 09:44:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:07:19.587 09:44:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:07:19.587 09:44:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:07:19.587 09:44:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:07:19.587 09:44:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:07:19.587 09:44:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:07:19.587 09:44:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:07:19.587 09:44:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:19.587 09:44:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:19.587 09:44:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:07:19.587 09:44:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:07:19.587 09:44:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:07:19.587 09:44:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:07:19.587 09:44:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:19.587 09:44:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:20.170 09:44:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:07:20.170 09:44:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:20.170 09:44:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:20.170 09:44:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:20.170 { 00:07:20.170 "subsystems": [ 00:07:20.170 { 00:07:20.170 "subsystem": "bdev", 00:07:20.170 "config": [ 00:07:20.170 { 00:07:20.170 "params": { 00:07:20.170 "trtype": "pcie", 00:07:20.170 "traddr": "0000:00:10.0", 00:07:20.170 "name": "Nvme0" 00:07:20.170 }, 00:07:20.170 "method": "bdev_nvme_attach_controller" 00:07:20.170 }, 00:07:20.170 { 00:07:20.170 "method": "bdev_wait_for_examine" 00:07:20.170 } 00:07:20.170 ] 00:07:20.170 } 00:07:20.170 
] 00:07:20.170 } 00:07:20.170 [2024-12-06 09:44:45.346070] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:07:20.170 [2024-12-06 09:44:45.346544] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59600 ] 00:07:20.434 [2024-12-06 09:44:45.496064] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.434 [2024-12-06 09:44:45.537644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.434 [2024-12-06 09:44:45.591937] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:20.434  [2024-12-06T09:44:45.965Z] Copying: 60/60 [kB] (average 19 MBps) 00:07:20.693 00:07:20.693 09:44:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:07:20.693 09:44:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:20.693 09:44:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:20.693 09:44:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:20.693 [2024-12-06 09:44:45.921479] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:07:20.693 [2024-12-06 09:44:45.921738] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59614 ] 00:07:20.693 { 00:07:20.693 "subsystems": [ 00:07:20.693 { 00:07:20.693 "subsystem": "bdev", 00:07:20.693 "config": [ 00:07:20.693 { 00:07:20.693 "params": { 00:07:20.693 "trtype": "pcie", 00:07:20.693 "traddr": "0000:00:10.0", 00:07:20.693 "name": "Nvme0" 00:07:20.693 }, 00:07:20.693 "method": "bdev_nvme_attach_controller" 00:07:20.694 }, 00:07:20.694 { 00:07:20.694 "method": "bdev_wait_for_examine" 00:07:20.694 } 00:07:20.694 ] 00:07:20.694 } 00:07:20.694 ] 00:07:20.694 } 00:07:20.952 [2024-12-06 09:44:46.061039] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.952 [2024-12-06 09:44:46.108403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.952 [2024-12-06 09:44:46.161990] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:21.211  [2024-12-06T09:44:46.483Z] Copying: 60/60 [kB] (average 14 MBps) 00:07:21.211 00:07:21.211 09:44:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:21.470 09:44:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:07:21.470 09:44:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:21.470 09:44:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:21.470 09:44:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:07:21.470 09:44:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:21.470 09:44:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:21.470 09:44:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- 
dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:21.470 09:44:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:21.470 09:44:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:21.470 09:44:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:21.470 { 00:07:21.470 "subsystems": [ 00:07:21.470 { 00:07:21.470 "subsystem": "bdev", 00:07:21.470 "config": [ 00:07:21.470 { 00:07:21.470 "params": { 00:07:21.470 "trtype": "pcie", 00:07:21.470 "traddr": "0000:00:10.0", 00:07:21.470 "name": "Nvme0" 00:07:21.470 }, 00:07:21.470 "method": "bdev_nvme_attach_controller" 00:07:21.470 }, 00:07:21.470 { 00:07:21.470 "method": "bdev_wait_for_examine" 00:07:21.470 } 00:07:21.470 ] 00:07:21.470 } 00:07:21.470 ] 00:07:21.470 } 00:07:21.470 [2024-12-06 09:44:46.536759] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:07:21.470 [2024-12-06 09:44:46.537219] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59635 ] 00:07:21.470 [2024-12-06 09:44:46.678673] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.470 [2024-12-06 09:44:46.718717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.730 [2024-12-06 09:44:46.771908] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:21.730  [2024-12-06T09:44:47.261Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:07:21.989 00:07:21.989 09:44:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:21.989 09:44:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:07:21.989 09:44:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:07:21.989 09:44:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:07:21.989 09:44:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:07:21.989 09:44:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:21.990 09:44:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:22.557 09:44:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:07:22.557 09:44:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:22.557 09:44:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:22.557 09:44:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:22.557 { 00:07:22.557 "subsystems": [ 00:07:22.557 { 00:07:22.557 "subsystem": "bdev", 00:07:22.557 "config": [ 00:07:22.557 { 00:07:22.557 "params": { 00:07:22.557 "trtype": "pcie", 00:07:22.557 "traddr": "0000:00:10.0", 00:07:22.557 "name": "Nvme0" 00:07:22.557 }, 00:07:22.557 "method": "bdev_nvme_attach_controller" 00:07:22.557 }, 00:07:22.557 { 00:07:22.557 "method": "bdev_wait_for_examine" 00:07:22.557 } 00:07:22.557 ] 00:07:22.557 } 00:07:22.557 ] 00:07:22.557 } 00:07:22.557 [2024-12-06 09:44:47.677316] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
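[editor's note] The dd_bs_lt_native_bs run above is expected to fail: the harness parsed the 4096-byte native block size out of the identify output (lbaf=4096) and then asked for --bs=2048, so spdk_dd reports "--bs value cannot be less than ... native block size" and the NOT wrapper turns that failure into a pass (es ends up 1). The dd_rw iterations that follow all repeat the same write/read-back/verify cycle that has just completed. A minimal sketch of one such cycle, reconstructed from the log rather than taken from basic_rw.sh itself, with bdev.json standing in for the /dev/fd/62 descriptor the harness uses:

  SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  # write dd.dump0 to the Nvme0n1 bdev, read it back into dd.dump1, then compare
  "$SPDK_DD" --if=test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json bdev.json
  "$SPDK_DD" --ib=Nvme0n1 --of=test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json bdev.json
  diff -q test/dd/dd.dump0 test/dd/dd.dump1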
00:07:22.557 [2024-12-06 09:44:47.677439] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59654 ] 00:07:22.557 [2024-12-06 09:44:47.827279] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.817 [2024-12-06 09:44:47.868629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.817 [2024-12-06 09:44:47.921754] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:22.817  [2024-12-06T09:44:48.348Z] Copying: 60/60 [kB] (average 58 MBps) 00:07:23.076 00:07:23.076 09:44:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:07:23.076 09:44:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:23.076 09:44:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:23.076 09:44:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:23.076 [2024-12-06 09:44:48.290300] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:07:23.076 [2024-12-06 09:44:48.290439] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59667 ] 00:07:23.076 { 00:07:23.076 "subsystems": [ 00:07:23.076 { 00:07:23.076 "subsystem": "bdev", 00:07:23.076 "config": [ 00:07:23.076 { 00:07:23.076 "params": { 00:07:23.076 "trtype": "pcie", 00:07:23.076 "traddr": "0000:00:10.0", 00:07:23.076 "name": "Nvme0" 00:07:23.076 }, 00:07:23.076 "method": "bdev_nvme_attach_controller" 00:07:23.076 }, 00:07:23.076 { 00:07:23.076 "method": "bdev_wait_for_examine" 00:07:23.076 } 00:07:23.076 ] 00:07:23.076 } 00:07:23.076 ] 00:07:23.076 } 00:07:23.336 [2024-12-06 09:44:48.438438] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.337 [2024-12-06 09:44:48.479692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.337 [2024-12-06 09:44:48.537037] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:23.596  [2024-12-06T09:44:48.868Z] Copying: 60/60 [kB] (average 29 MBps) 00:07:23.596 00:07:23.597 09:44:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:23.597 09:44:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:07:23.597 09:44:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:23.597 09:44:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:23.597 09:44:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:07:23.597 09:44:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:23.597 09:44:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:23.597 09:44:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 
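[editor's note] Every spdk_dd invocation in this section receives its bdev configuration on a file descriptor (--json /dev/fd/62, produced by gen_conf), and the JSON itself is echoed into the log: it attaches the NVMe controller at PCIe address 0000:00:10.0 as bdev Nvme0 and then waits for bdev examination. A sketch of an equivalent standalone run, assuming a plain file instead of the harness's descriptor:

  cat > bdev.json <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          { "params": { "trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0" },
            "method": "bdev_nvme_attach_controller" },
          { "method": "bdev_wait_for_examine" }
        ]
      }
    ]
  }
  EOF
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --ob=Nvme0n1 --bs=4096 --count=1 --json bdev.json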
00:07:23.597 09:44:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:23.597 09:44:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:23.597 09:44:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:23.856 [2024-12-06 09:44:48.877778] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:07:23.856 [2024-12-06 09:44:48.877870] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59683 ] 00:07:23.856 { 00:07:23.856 "subsystems": [ 00:07:23.856 { 00:07:23.856 "subsystem": "bdev", 00:07:23.856 "config": [ 00:07:23.856 { 00:07:23.856 "params": { 00:07:23.856 "trtype": "pcie", 00:07:23.856 "traddr": "0000:00:10.0", 00:07:23.856 "name": "Nvme0" 00:07:23.856 }, 00:07:23.856 "method": "bdev_nvme_attach_controller" 00:07:23.856 }, 00:07:23.856 { 00:07:23.856 "method": "bdev_wait_for_examine" 00:07:23.856 } 00:07:23.856 ] 00:07:23.856 } 00:07:23.856 ] 00:07:23.856 } 00:07:23.856 [2024-12-06 09:44:49.008830] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.856 [2024-12-06 09:44:49.052280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.856 [2024-12-06 09:44:49.106017] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:24.115  [2024-12-06T09:44:49.646Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:24.374 00:07:24.374 09:44:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:24.374 09:44:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:24.374 09:44:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:07:24.374 09:44:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:07:24.374 09:44:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:07:24.374 09:44:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:07:24.374 09:44:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:24.374 09:44:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:24.633 09:44:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:07:24.633 09:44:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:24.633 09:44:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:24.633 09:44:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:24.891 { 00:07:24.891 "subsystems": [ 00:07:24.891 { 00:07:24.891 "subsystem": "bdev", 00:07:24.891 "config": [ 00:07:24.891 { 00:07:24.891 "params": { 00:07:24.891 "trtype": "pcie", 00:07:24.891 "traddr": "0000:00:10.0", 00:07:24.891 "name": "Nvme0" 00:07:24.891 }, 00:07:24.891 "method": "bdev_nvme_attach_controller" 00:07:24.891 }, 00:07:24.891 { 00:07:24.891 "method": "bdev_wait_for_examine" 00:07:24.891 } 00:07:24.891 ] 00:07:24.891 } 00:07:24.891 ] 00:07:24.891 } 00:07:24.891 [2024-12-06 09:44:49.956160] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:07:24.891 [2024-12-06 09:44:49.956476] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59702 ] 00:07:24.891 [2024-12-06 09:44:50.102064] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.891 [2024-12-06 09:44:50.143083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.150 [2024-12-06 09:44:50.197072] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:25.150  [2024-12-06T09:44:50.681Z] Copying: 56/56 [kB] (average 54 MBps) 00:07:25.409 00:07:25.409 09:44:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:07:25.409 09:44:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:25.409 09:44:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:25.409 09:44:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:25.409 [2024-12-06 09:44:50.562014] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:07:25.409 [2024-12-06 09:44:50.562124] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59715 ] 00:07:25.409 { 00:07:25.409 "subsystems": [ 00:07:25.409 { 00:07:25.409 "subsystem": "bdev", 00:07:25.409 "config": [ 00:07:25.409 { 00:07:25.409 "params": { 00:07:25.409 "trtype": "pcie", 00:07:25.409 "traddr": "0000:00:10.0", 00:07:25.409 "name": "Nvme0" 00:07:25.409 }, 00:07:25.409 "method": "bdev_nvme_attach_controller" 00:07:25.409 }, 00:07:25.409 { 00:07:25.409 "method": "bdev_wait_for_examine" 00:07:25.409 } 00:07:25.409 ] 00:07:25.409 } 00:07:25.409 ] 00:07:25.409 } 00:07:25.668 [2024-12-06 09:44:50.705960] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.668 [2024-12-06 09:44:50.745613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.668 [2024-12-06 09:44:50.797989] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:25.668  [2024-12-06T09:44:51.198Z] Copying: 56/56 [kB] (average 54 MBps) 00:07:25.926 00:07:25.926 09:44:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:25.926 09:44:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:07:25.926 09:44:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:25.926 09:44:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:25.926 09:44:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:07:25.926 09:44:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:25.926 09:44:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:25.926 09:44:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 
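[editor's note] Between combinations the harness calls clear_nvme (the /dev/zero copy visible above, clear_nvme Nvme0n1 '' with bs=1048576 and count=1), overwriting the region just written with 1 MiB of zeroes, presumably so data left over from the previous run cannot satisfy the next diff. A minimal sketch of that step, under the same file/config assumptions as the earlier snippets:

  # zero the region under test: one 1048576-byte block from /dev/zero
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --ob=Nvme0n1 --bs=1048576 --count=1 --json bdev.json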
00:07:25.926 09:44:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:25.926 09:44:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:25.926 09:44:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:25.926 { 00:07:25.926 "subsystems": [ 00:07:25.926 { 00:07:25.926 "subsystem": "bdev", 00:07:25.926 "config": [ 00:07:25.926 { 00:07:25.926 "params": { 00:07:25.926 "trtype": "pcie", 00:07:25.926 "traddr": "0000:00:10.0", 00:07:25.926 "name": "Nvme0" 00:07:25.926 }, 00:07:25.926 "method": "bdev_nvme_attach_controller" 00:07:25.926 }, 00:07:25.926 { 00:07:25.926 "method": "bdev_wait_for_examine" 00:07:25.926 } 00:07:25.926 ] 00:07:25.926 } 00:07:25.926 ] 00:07:25.926 } 00:07:25.926 [2024-12-06 09:44:51.151030] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:07:25.926 [2024-12-06 09:44:51.151136] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59731 ] 00:07:26.184 [2024-12-06 09:44:51.295474] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.184 [2024-12-06 09:44:51.345821] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.184 [2024-12-06 09:44:51.403012] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:26.443  [2024-12-06T09:44:51.715Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:07:26.443 00:07:26.443 09:44:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:26.443 09:44:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:07:26.443 09:44:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:07:26.443 09:44:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:07:26.443 09:44:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:07:26.443 09:44:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:26.443 09:44:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:27.009 09:44:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:07:27.009 09:44:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:27.009 09:44:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:27.009 09:44:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:27.009 { 00:07:27.009 "subsystems": [ 00:07:27.009 { 00:07:27.009 "subsystem": "bdev", 00:07:27.009 "config": [ 00:07:27.009 { 00:07:27.009 "params": { 00:07:27.009 "trtype": "pcie", 00:07:27.009 "traddr": "0000:00:10.0", 00:07:27.009 "name": "Nvme0" 00:07:27.009 }, 00:07:27.009 "method": "bdev_nvme_attach_controller" 00:07:27.009 }, 00:07:27.009 { 00:07:27.009 "method": "bdev_wait_for_examine" 00:07:27.009 } 00:07:27.009 ] 00:07:27.009 } 00:07:27.009 ] 00:07:27.009 } 00:07:27.009 [2024-12-06 09:44:52.273172] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:07:27.009 [2024-12-06 09:44:52.273293] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59750 ] 00:07:27.268 [2024-12-06 09:44:52.422507] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.268 [2024-12-06 09:44:52.469824] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.268 [2024-12-06 09:44:52.524215] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:27.527  [2024-12-06T09:44:53.058Z] Copying: 56/56 [kB] (average 54 MBps) 00:07:27.786 00:07:27.786 09:44:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:07:27.786 09:44:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:27.786 09:44:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:27.786 09:44:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:27.786 [2024-12-06 09:44:52.856622] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:07:27.786 [2024-12-06 09:44:52.856877] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59769 ] 00:07:27.786 { 00:07:27.786 "subsystems": [ 00:07:27.786 { 00:07:27.786 "subsystem": "bdev", 00:07:27.786 "config": [ 00:07:27.786 { 00:07:27.786 "params": { 00:07:27.786 "trtype": "pcie", 00:07:27.786 "traddr": "0000:00:10.0", 00:07:27.786 "name": "Nvme0" 00:07:27.786 }, 00:07:27.786 "method": "bdev_nvme_attach_controller" 00:07:27.786 }, 00:07:27.786 { 00:07:27.786 "method": "bdev_wait_for_examine" 00:07:27.786 } 00:07:27.786 ] 00:07:27.786 } 00:07:27.786 ] 00:07:27.786 } 00:07:27.786 [2024-12-06 09:44:52.993850] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.786 [2024-12-06 09:44:53.043528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.045 [2024-12-06 09:44:53.098426] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:28.045  [2024-12-06T09:44:53.576Z] Copying: 56/56 [kB] (average 54 MBps) 00:07:28.304 00:07:28.304 09:44:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:28.304 09:44:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:07:28.304 09:44:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:28.304 09:44:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:28.304 09:44:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:07:28.304 09:44:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:28.304 09:44:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:28.304 09:44:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 
00:07:28.304 09:44:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:28.304 09:44:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:28.304 09:44:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:28.304 [2024-12-06 09:44:53.457616] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:07:28.304 [2024-12-06 09:44:53.457753] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59784 ] 00:07:28.304 { 00:07:28.304 "subsystems": [ 00:07:28.304 { 00:07:28.304 "subsystem": "bdev", 00:07:28.304 "config": [ 00:07:28.304 { 00:07:28.304 "params": { 00:07:28.304 "trtype": "pcie", 00:07:28.304 "traddr": "0000:00:10.0", 00:07:28.304 "name": "Nvme0" 00:07:28.304 }, 00:07:28.304 "method": "bdev_nvme_attach_controller" 00:07:28.304 }, 00:07:28.304 { 00:07:28.304 "method": "bdev_wait_for_examine" 00:07:28.304 } 00:07:28.304 ] 00:07:28.304 } 00:07:28.304 ] 00:07:28.304 } 00:07:28.563 [2024-12-06 09:44:53.603598] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.563 [2024-12-06 09:44:53.653178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.563 [2024-12-06 09:44:53.707080] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:28.563  [2024-12-06T09:44:54.094Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:28.822 00:07:28.822 09:44:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:28.822 09:44:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:28.822 09:44:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:07:28.822 09:44:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:07:28.822 09:44:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:07:28.822 09:44:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:07:28.822 09:44:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:28.822 09:44:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:29.389 09:44:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:07:29.389 09:44:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:29.389 09:44:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:29.389 09:44:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:29.389 [2024-12-06 09:44:54.501338] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
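[editor's note] The transfer sizes cycling through the log follow directly from the loop variables shown in the xtrace: the block sizes are the 4096-byte native block size shifted left by 0..2, and count is chosen per block size, giving 15 × 4096 = 61440, 7 × 8192 = 57344 and 3 × 16384 = 49152 bytes. A small bash sketch of that derivation (variable names are illustrative, not the harness's own):

  native_bs=4096
  qds=(1 64)          # queue depths paired with every block size in the log
  bss=()
  for bs in {0..2}; do
    bss+=($((native_bs << bs)))   # 4096 8192 16384
  done
  declare -A counts=( [4096]=15 [8192]=7 [16384]=3 )
  for bs in "${bss[@]}"; do
    echo "bs=$bs count=${counts[$bs]} size=$((bs * ${counts[$bs]}))"
  done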
00:07:29.389 [2024-12-06 09:44:54.501644] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59803 ] 00:07:29.389 { 00:07:29.389 "subsystems": [ 00:07:29.389 { 00:07:29.389 "subsystem": "bdev", 00:07:29.389 "config": [ 00:07:29.389 { 00:07:29.389 "params": { 00:07:29.389 "trtype": "pcie", 00:07:29.389 "traddr": "0000:00:10.0", 00:07:29.389 "name": "Nvme0" 00:07:29.389 }, 00:07:29.389 "method": "bdev_nvme_attach_controller" 00:07:29.389 }, 00:07:29.389 { 00:07:29.389 "method": "bdev_wait_for_examine" 00:07:29.389 } 00:07:29.389 ] 00:07:29.389 } 00:07:29.389 ] 00:07:29.389 } 00:07:29.389 [2024-12-06 09:44:54.648816] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.648 [2024-12-06 09:44:54.700697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.648 [2024-12-06 09:44:54.753341] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:29.648  [2024-12-06T09:44:55.189Z] Copying: 48/48 [kB] (average 46 MBps) 00:07:29.917 00:07:29.917 09:44:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:07:29.917 09:44:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:29.917 09:44:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:29.917 09:44:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:29.917 { 00:07:29.917 "subsystems": [ 00:07:29.917 { 00:07:29.917 "subsystem": "bdev", 00:07:29.917 "config": [ 00:07:29.917 { 00:07:29.917 "params": { 00:07:29.917 "trtype": "pcie", 00:07:29.917 "traddr": "0000:00:10.0", 00:07:29.917 "name": "Nvme0" 00:07:29.917 }, 00:07:29.917 "method": "bdev_nvme_attach_controller" 00:07:29.917 }, 00:07:29.917 { 00:07:29.917 "method": "bdev_wait_for_examine" 00:07:29.917 } 00:07:29.917 ] 00:07:29.917 } 00:07:29.917 ] 00:07:29.917 } 00:07:29.917 [2024-12-06 09:44:55.099249] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:07:29.917 [2024-12-06 09:44:55.099348] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59817 ] 00:07:30.181 [2024-12-06 09:44:55.243746] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.181 [2024-12-06 09:44:55.284679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.181 [2024-12-06 09:44:55.338997] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:30.440  [2024-12-06T09:44:55.712Z] Copying: 48/48 [kB] (average 46 MBps) 00:07:30.440 00:07:30.440 09:44:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:30.440 09:44:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:07:30.440 09:44:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:30.440 09:44:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:30.440 09:44:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:07:30.440 09:44:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:30.440 09:44:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:30.440 09:44:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:30.440 09:44:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:30.440 09:44:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:30.440 09:44:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:30.440 { 00:07:30.440 "subsystems": [ 00:07:30.440 { 00:07:30.440 "subsystem": "bdev", 00:07:30.440 "config": [ 00:07:30.440 { 00:07:30.440 "params": { 00:07:30.440 "trtype": "pcie", 00:07:30.440 "traddr": "0000:00:10.0", 00:07:30.440 "name": "Nvme0" 00:07:30.440 }, 00:07:30.440 "method": "bdev_nvme_attach_controller" 00:07:30.440 }, 00:07:30.440 { 00:07:30.440 "method": "bdev_wait_for_examine" 00:07:30.440 } 00:07:30.440 ] 00:07:30.440 } 00:07:30.440 ] 00:07:30.440 } 00:07:30.440 [2024-12-06 09:44:55.703494] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:07:30.440 [2024-12-06 09:44:55.703643] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59832 ] 00:07:30.700 [2024-12-06 09:44:55.846471] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.700 [2024-12-06 09:44:55.894201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.700 [2024-12-06 09:44:55.946666] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:30.964  [2024-12-06T09:44:56.496Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:07:31.224 00:07:31.224 09:44:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:31.224 09:44:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:07:31.224 09:44:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:07:31.224 09:44:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:07:31.224 09:44:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:07:31.224 09:44:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:31.224 09:44:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:31.483 09:44:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:07:31.483 09:44:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:31.483 09:44:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:31.483 09:44:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:31.742 { 00:07:31.742 "subsystems": [ 00:07:31.742 { 00:07:31.742 "subsystem": "bdev", 00:07:31.742 "config": [ 00:07:31.742 { 00:07:31.742 "params": { 00:07:31.742 "trtype": "pcie", 00:07:31.742 "traddr": "0000:00:10.0", 00:07:31.742 "name": "Nvme0" 00:07:31.742 }, 00:07:31.742 "method": "bdev_nvme_attach_controller" 00:07:31.742 }, 00:07:31.742 { 00:07:31.742 "method": "bdev_wait_for_examine" 00:07:31.742 } 00:07:31.742 ] 00:07:31.742 } 00:07:31.742 ] 00:07:31.742 } 00:07:31.742 [2024-12-06 09:44:56.757074] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:07:31.742 [2024-12-06 09:44:56.757382] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59857 ] 00:07:31.742 [2024-12-06 09:44:56.902796] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.742 [2024-12-06 09:44:56.942975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.742 [2024-12-06 09:44:56.994269] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:32.001  [2024-12-06T09:44:57.532Z] Copying: 48/48 [kB] (average 46 MBps) 00:07:32.260 00:07:32.260 09:44:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:07:32.260 09:44:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:32.260 09:44:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:32.260 09:44:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:32.260 [2024-12-06 09:44:57.337836] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:07:32.260 [2024-12-06 09:44:57.337948] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59865 ] 00:07:32.260 { 00:07:32.260 "subsystems": [ 00:07:32.260 { 00:07:32.260 "subsystem": "bdev", 00:07:32.260 "config": [ 00:07:32.260 { 00:07:32.260 "params": { 00:07:32.260 "trtype": "pcie", 00:07:32.260 "traddr": "0000:00:10.0", 00:07:32.260 "name": "Nvme0" 00:07:32.260 }, 00:07:32.260 "method": "bdev_nvme_attach_controller" 00:07:32.260 }, 00:07:32.260 { 00:07:32.260 "method": "bdev_wait_for_examine" 00:07:32.260 } 00:07:32.260 ] 00:07:32.260 } 00:07:32.260 ] 00:07:32.260 } 00:07:32.260 [2024-12-06 09:44:57.483409] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.519 [2024-12-06 09:44:57.538075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.519 [2024-12-06 09:44:57.594139] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:32.519  [2024-12-06T09:44:58.050Z] Copying: 48/48 [kB] (average 46 MBps) 00:07:32.778 00:07:32.778 09:44:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:32.778 09:44:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:07:32.778 09:44:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:32.778 09:44:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:32.778 09:44:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:07:32.778 09:44:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:32.778 09:44:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:32.778 09:44:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 
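[editor's note] Each block size is exercised at two queue depths, first --qd=1 and then --qd=64; that flag is the only difference between the paired write invocations in the log, and the per-copy throughput in the "Copying:" lines is what varies. Side by side, assuming the same input file and config as before:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1  --json bdev.json
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json bdev.json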
00:07:32.778 09:44:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:32.778 09:44:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:32.778 09:44:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:32.778 { 00:07:32.778 "subsystems": [ 00:07:32.778 { 00:07:32.778 "subsystem": "bdev", 00:07:32.778 "config": [ 00:07:32.778 { 00:07:32.778 "params": { 00:07:32.778 "trtype": "pcie", 00:07:32.778 "traddr": "0000:00:10.0", 00:07:32.778 "name": "Nvme0" 00:07:32.778 }, 00:07:32.778 "method": "bdev_nvme_attach_controller" 00:07:32.778 }, 00:07:32.778 { 00:07:32.778 "method": "bdev_wait_for_examine" 00:07:32.778 } 00:07:32.778 ] 00:07:32.778 } 00:07:32.778 ] 00:07:32.778 } 00:07:32.778 [2024-12-06 09:44:57.955392] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:07:32.778 [2024-12-06 09:44:57.955512] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59886 ] 00:07:33.037 [2024-12-06 09:44:58.099256] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.037 [2024-12-06 09:44:58.135671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.037 [2024-12-06 09:44:58.186876] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:33.037  [2024-12-06T09:44:58.581Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:33.309 00:07:33.309 00:07:33.309 real 0m13.840s 00:07:33.309 user 0m10.019s 00:07:33.309 sys 0m5.335s 00:07:33.309 09:44:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:33.309 09:44:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:33.309 ************************************ 00:07:33.309 END TEST dd_rw 00:07:33.309 ************************************ 00:07:33.309 09:44:58 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:07:33.309 09:44:58 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:33.309 09:44:58 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:33.309 09:44:58 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:33.309 ************************************ 00:07:33.309 START TEST dd_rw_offset 00:07:33.309 ************************************ 00:07:33.309 09:44:58 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1129 -- # basic_offset 00:07:33.309 09:44:58 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:07:33.309 09:44:58 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:07:33.309 09:44:58 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:07:33.309 09:44:58 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:33.596 09:44:58 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:07:33.596 09:44:58 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=qyp14thtuv4fgvm5uxaabw2ypfn2f0a4zdk3rox2ob5jpylj9uz86om2pbww4advhos3x5cx19ornvyjodbex8maqo9aw2d9qzqbfc8ancvwsk1ljlneyxmrjh2yrmxtqd03rxo18nb9mm6x4fgy0dh1juf9q03ep61whzsukvg6ixez2yttezjw38iwydzz1syp44ylvx4sa17zetvpfxzr01bgivrjltkv57129pd6lc7pqxv67sk5j5zkeaqx7074p4yqq7xjwv7veu407daz9866mc9uo00iqoahgnnts74znt9vh2kki9kxg2h2171o6x841v9ntatfk8cink2h76a954ynsucvq8qf6kp08wu3zbefi009ys0stymplyny46mhqt4f1g3afvdaxqk842bln6wjg339qfwprqtm6mod9go04f6j0gg2ngla14koeeh6xgpj1bbrlkk2u9klz1qsmzlo4dm3592hlkig9djwz6wsunvdeo3y9re19qgwr23i2i1776tz5m5hapf5oai0dag2vn7bbptxqug86i1pf3qrh1d8rd52ggx5g9mw4f5rgqop6a9jx1tz4pycjuktxktq7zrcmmsyi7k6nwgr7qt549rgljqq3gfvvz8vl070d44i5cwpto3eqptlslsvjxv8k5fqpmzcl4blpyowajar5ozh2kbrobke9dxoc95gepd8wcwpyc1auw2bymhnur8o9npuw4ekh5cdaf3c9kwc8hop8onjoks3d0v5twzx6kdb1pu7h26ryfvnzeij3h86nyku76qsqpp8jjgs6b52wma6wbjdqqrardo4b2lft6koegqldg50w1kbmw2jglqggy0akvd1t6k1x6iywjflgb87h9plzrtj8quqbh6otrre6lke89ao5utttdsdy7c5yrrp0qzpjuvvfokxrfcvuae4lfv1zicxvn0j557a7mvo8x3jao6xr60dktol2h87z4aiz5v41vej6gcxkkeddh3bhbnc705tm7cl7w16igwkolrnvdx27s177q074tiuxqquot9c93rja0olrei3ql3d030ztzyhuxxo1t2e8qhu1bka0c49vo55dadssfyb87umslm85l205stb8p7k6r3h7bgpqjgnivwfvcb7lpkjdkv3hdanj1uaiygt5ijszhvp3z1za6iprs3ekbc2ge9xefh3zyfbdi2kynd0vj73yc9gdo87cfyg2e56h5cxzpgy3u9o7lar92fzvxsyfi3u2r4mgi22hgsxf1gpgvbcvmwnvzzvl8ce01g9m0u0dvfcx31wrfp28wf6de40nuqf7u8nmdrokw2dcusqrvnr2n33kqtygeekbxzsejw6ukj92paw5i1kzb4l13aon4ae9gg0os1728pzx4t3xgyallomzhby4yo9fxp0rr5n4bvrwtzxjg7ilnm9ngup270h34eumzxcsd2g1xzejl4qp6r7mv1x7rkpwzsi5gqwmtybjfvlfc4gjpewx5mttzb2rwfio7j3p549xcz6uqb333ura248c71dq7xw58m4w979ptf6h4gn7507epmmarqsl0jqf84ofmlbl2ge5h7gcc58e26915whczas3jt7y5npr46leqzdw60nzlj8v4hl0025zuq8stebj8x53uw8x6bv318jj2ai6n4edoh2kbh6ckf7oeyw3vyb5u66o1eiqqytrndqqkldcoee1ts3e41s712jmpm5xsbt1f7cg1g00hblbs7jltcdg0dgk7wnu6p83n4do3chyadchew9sw2kizo03g6aqdw5envf79rxgp9jhsa5vp12zpwhwzgoz86dijdto045qxcewmv2199ji49yt8pydrqr8n8yamunuzhmv5o9yg7strklb65b2asc5j3bhmwk5cl9j4b2wtfa6l9lzr30l31nncjx9xegt7a639ydiuilo0zc0g659uxfybj4klcu56dhdfbvyq90ocgza693goqx40lnkyydgh2e3dsd1rbw1vwn0v7k29et8k14esi6yifoixa4gajs0qrlli67upcpdv48bk7s420jktf36jduzc064eqj2rqnqjr35af35d4t5cg4ogc97jav737hhbcctvuzqfoj99ruo5164f96jtwn4hyht533z467yunskt4an75i8tg2wcs137buoxyusf8y9ma8uxi4vrfponesuu9iq36966olwluqjn7kav9pu4sbhjyp4m1wtac8o246m739cauclp13v9kplsw173vxm1u01x6k2p2hnj3580on2hkh4eafawmy6iawx4juqe24xcb7eozxzwscdr8j9hluj1wbubg6c6jqg0zbt305rzviwgc79x7a19egugb4plo3l3xxkziggif5d26awf9hf5n8i5g96h06ob4xzoy1sz1hc6ycpkjly5pxgdnz3uadzid0n8aw0l9buek2pyrlxcwpg50lthieoly4wtr6h7quxmws4ohwp58lzwad5whb8f5xvsspbckjt726fi05p63nbrbjntfd741xwryv3bio2nf0fs8y5y485fyn5kckgs2fk9e5bg71239trx992cxcy0wfjbyl26tr6ij67b7hbeoxp7b7ps9bemep2f1dz1r688jc3hzngwhmt3zd9mq6jrwmfcvrkh0fvwvj96jmfuu4i9vqb14xgxdg6lkzw0k31emveog4kvx9bcycxx0rfj7mhop8aby6ehui75wzmtu3vxn8g4jk1wck47cjo7t85r19llyb03vf8zs36dbd06zpp1utmksqkag7pgsyngq0h8gcxhrb7pqrls8cxuvokhbtr0w6r3np7qimn27mogtcjx0i2wwu5axklh1b1v36rkb77ovidgyec3v1mnv8k9ak56q1ny0lluraum6qvbi3a4suf1e8srhtdp4i8541pyif7f3euu0ip843brhrflhi8thfgouqyy7nfo3swuxdxbjn7vfb8x5okozs8q83ovzw8ni5iqz3m9rehzcvkkcysb9rln69lymkbmvpd0iuv8htpa1f6l2ndjtpn4fkrwc7x8pmhowf1jhd4nlvr9gdmhk76c7ewpj44zzojsjeius1idek6gu3wmer58o6r8tbn6floxkvs33ikknyzl7hvsdfw4cih6b3m3zpyavibh9rr6lb2g8xlacdmoia37ixh8392ugc9seadfcm4khwfddhrpsxm0jdudnxjxjp1p59rzh3ova44ijel0d4j9uerfipb8e0v4hpyv10yf26l0kkkb7hcbhfwcfzn64xzgr9prusw027plocuak3d5tro1iod0hedd4covhbnafafivyqwnss12ycthx9efcxwsab18vdg93opuyffcc6vwfeyb0oz571z238241n5ubac77mrjclnzna9fsaaa5wzzdkmuid3r9id2nmrpck4jc7lkh7r8i13jz2y9e1h0umepqjzy1kexj0nunvo7jx9f2cp9n9q8ha4fqc68xcmuz40z5pux6qi
5jha2awysv8n8xxl26o0o6tztfgpocwbuy6630078mgxu22dk2cl3yt6y7avpmq2oayiygx60z9xc1urg7ktgqaqmjgkkihrmh6n1zbahhhayse3ps1xl07b5mm9ss8xnjyxg1grcpkkv9y0x18yli8ijgk7e05uvoabvut98s1slbwfff5ss6k9lrfhdv05e2ex1gr3t7ppmvi0grlncutbfdwp0k9toiw7w3cdfrado6wpvcnhlwyvu68lwbk7bmsd63gzf3vtqtcxkmdpezyuszap81rrtj1tv1v7qjwxcf2npvbdci4pfonhqhxgjgcdt3uxoaqv8c13su0tjeg36rhkm9ld7uapb9gkubzdenrpwqjzcc8629htz1wxe2pbt9eqbtnoisz4qbck4xvn5fgoyst04c8k1twcu56etqavfvy4wpt3g7ihc0cvdml03365djighizdubrkzzjj1q7siex99iwnml2fydc107itbnswr1c5bb72fq5m59oj76xa6zg7vjcplow5xabg7lh4m29cd4 00:07:33.596 09:44:58 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:07:33.596 09:44:58 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:07:33.596 09:44:58 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:07:33.596 09:44:58 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:33.596 { 00:07:33.596 "subsystems": [ 00:07:33.596 { 00:07:33.596 "subsystem": "bdev", 00:07:33.596 "config": [ 00:07:33.596 { 00:07:33.596 "params": { 00:07:33.596 "trtype": "pcie", 00:07:33.596 "traddr": "0000:00:10.0", 00:07:33.596 "name": "Nvme0" 00:07:33.596 }, 00:07:33.596 "method": "bdev_nvme_attach_controller" 00:07:33.596 }, 00:07:33.596 { 00:07:33.596 "method": "bdev_wait_for_examine" 00:07:33.596 } 00:07:33.596 ] 00:07:33.596 } 00:07:33.596 ] 00:07:33.596 } 00:07:33.596 [2024-12-06 09:44:58.643597] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:07:33.596 [2024-12-06 09:44:58.643724] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59915 ] 00:07:33.596 [2024-12-06 09:44:58.790374] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.596 [2024-12-06 09:44:58.829937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.866 [2024-12-06 09:44:58.884425] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:33.866  [2024-12-06T09:44:59.397Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:07:34.125 00:07:34.125 09:44:59 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:07:34.125 09:44:59 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:07:34.125 09:44:59 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:07:34.125 09:44:59 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:34.125 { 00:07:34.125 "subsystems": [ 00:07:34.125 { 00:07:34.125 "subsystem": "bdev", 00:07:34.125 "config": [ 00:07:34.125 { 00:07:34.125 "params": { 00:07:34.125 "trtype": "pcie", 00:07:34.125 "traddr": "0000:00:10.0", 00:07:34.125 "name": "Nvme0" 00:07:34.125 }, 00:07:34.125 "method": "bdev_nvme_attach_controller" 00:07:34.125 }, 00:07:34.125 { 00:07:34.125 "method": "bdev_wait_for_examine" 00:07:34.125 } 00:07:34.125 ] 00:07:34.125 } 00:07:34.125 ] 00:07:34.125 } 00:07:34.125 [2024-12-06 09:44:59.232156] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
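[editor's note] The dd_rw_offset test running here generates 4096 bytes of random data (the long string above), writes test/dd/dd.dump0 to the bdev at a one-block offset with --seek=1, reads that single block back with --skip=1 --count=1, and then compares the bytes read (read -rn4096 data_check) against the generated string. A minimal sketch of the same round trip, again assuming a plain config file in place of /dev/fd/62:

  # write one block of data one block into the bdev, then read it back from the same offset
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json bdev.json
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=test/dd/dd.dump1 --skip=1 --count=1 --json bdev.json
  cmp test/dd/dd.dump0 test/dd/dd.dump1   # the harness instead compares via read -rn4096 and a [[ ... == ... ]] test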
00:07:34.125 [2024-12-06 09:44:59.232970] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59930 ] 00:07:34.125 [2024-12-06 09:44:59.379266] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.385 [2024-12-06 09:44:59.422245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.385 [2024-12-06 09:44:59.474410] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:34.385  [2024-12-06T09:44:59.918Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:07:34.646 00:07:34.646 ************************************ 00:07:34.646 END TEST dd_rw_offset 00:07:34.646 ************************************ 00:07:34.646 09:44:59 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:07:34.647 09:44:59 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ qyp14thtuv4fgvm5uxaabw2ypfn2f0a4zdk3rox2ob5jpylj9uz86om2pbww4advhos3x5cx19ornvyjodbex8maqo9aw2d9qzqbfc8ancvwsk1ljlneyxmrjh2yrmxtqd03rxo18nb9mm6x4fgy0dh1juf9q03ep61whzsukvg6ixez2yttezjw38iwydzz1syp44ylvx4sa17zetvpfxzr01bgivrjltkv57129pd6lc7pqxv67sk5j5zkeaqx7074p4yqq7xjwv7veu407daz9866mc9uo00iqoahgnnts74znt9vh2kki9kxg2h2171o6x841v9ntatfk8cink2h76a954ynsucvq8qf6kp08wu3zbefi009ys0stymplyny46mhqt4f1g3afvdaxqk842bln6wjg339qfwprqtm6mod9go04f6j0gg2ngla14koeeh6xgpj1bbrlkk2u9klz1qsmzlo4dm3592hlkig9djwz6wsunvdeo3y9re19qgwr23i2i1776tz5m5hapf5oai0dag2vn7bbptxqug86i1pf3qrh1d8rd52ggx5g9mw4f5rgqop6a9jx1tz4pycjuktxktq7zrcmmsyi7k6nwgr7qt549rgljqq3gfvvz8vl070d44i5cwpto3eqptlslsvjxv8k5fqpmzcl4blpyowajar5ozh2kbrobke9dxoc95gepd8wcwpyc1auw2bymhnur8o9npuw4ekh5cdaf3c9kwc8hop8onjoks3d0v5twzx6kdb1pu7h26ryfvnzeij3h86nyku76qsqpp8jjgs6b52wma6wbjdqqrardo4b2lft6koegqldg50w1kbmw2jglqggy0akvd1t6k1x6iywjflgb87h9plzrtj8quqbh6otrre6lke89ao5utttdsdy7c5yrrp0qzpjuvvfokxrfcvuae4lfv1zicxvn0j557a7mvo8x3jao6xr60dktol2h87z4aiz5v41vej6gcxkkeddh3bhbnc705tm7cl7w16igwkolrnvdx27s177q074tiuxqquot9c93rja0olrei3ql3d030ztzyhuxxo1t2e8qhu1bka0c49vo55dadssfyb87umslm85l205stb8p7k6r3h7bgpqjgnivwfvcb7lpkjdkv3hdanj1uaiygt5ijszhvp3z1za6iprs3ekbc2ge9xefh3zyfbdi2kynd0vj73yc9gdo87cfyg2e56h5cxzpgy3u9o7lar92fzvxsyfi3u2r4mgi22hgsxf1gpgvbcvmwnvzzvl8ce01g9m0u0dvfcx31wrfp28wf6de40nuqf7u8nmdrokw2dcusqrvnr2n33kqtygeekbxzsejw6ukj92paw5i1kzb4l13aon4ae9gg0os1728pzx4t3xgyallomzhby4yo9fxp0rr5n4bvrwtzxjg7ilnm9ngup270h34eumzxcsd2g1xzejl4qp6r7mv1x7rkpwzsi5gqwmtybjfvlfc4gjpewx5mttzb2rwfio7j3p549xcz6uqb333ura248c71dq7xw58m4w979ptf6h4gn7507epmmarqsl0jqf84ofmlbl2ge5h7gcc58e26915whczas3jt7y5npr46leqzdw60nzlj8v4hl0025zuq8stebj8x53uw8x6bv318jj2ai6n4edoh2kbh6ckf7oeyw3vyb5u66o1eiqqytrndqqkldcoee1ts3e41s712jmpm5xsbt1f7cg1g00hblbs7jltcdg0dgk7wnu6p83n4do3chyadchew9sw2kizo03g6aqdw5envf79rxgp9jhsa5vp12zpwhwzgoz86dijdto045qxcewmv2199ji49yt8pydrqr8n8yamunuzhmv5o9yg7strklb65b2asc5j3bhmwk5cl9j4b2wtfa6l9lzr30l31nncjx9xegt7a639ydiuilo0zc0g659uxfybj4klcu56dhdfbvyq90ocgza693goqx40lnkyydgh2e3dsd1rbw1vwn0v7k29et8k14esi6yifoixa4gajs0qrlli67upcpdv48bk7s420jktf36jduzc064eqj2rqnqjr35af35d4t5cg4ogc97jav737hhbcctvuzqfoj99ruo5164f96jtwn4hyht533z467yunskt4an75i8tg2wcs137buoxyusf8y9ma8uxi4vrfponesuu9iq36966olwluqjn7kav9pu4sbhjyp4m1wtac8o246m739cauclp13v9kplsw173vxm1u01x6k2p2hnj3580on2hkh4eafawmy6iawx4juqe24xcb7eozxzwscdr8j9hluj1wbubg6c6jqg0zbt305rzviwgc79x7a19egugb4plo3l3xxkziggif5d26awf9hf5n8i5g96h06ob4xzoy1sz1hc6ycpkjly5pxgdnz3uadzid0n8aw0
l9buek2pyrlxcwpg50lthieoly4wtr6h7quxmws4ohwp58lzwad5whb8f5xvsspbckjt726fi05p63nbrbjntfd741xwryv3bio2nf0fs8y5y485fyn5kckgs2fk9e5bg71239trx992cxcy0wfjbyl26tr6ij67b7hbeoxp7b7ps9bemep2f1dz1r688jc3hzngwhmt3zd9mq6jrwmfcvrkh0fvwvj96jmfuu4i9vqb14xgxdg6lkzw0k31emveog4kvx9bcycxx0rfj7mhop8aby6ehui75wzmtu3vxn8g4jk1wck47cjo7t85r19llyb03vf8zs36dbd06zpp1utmksqkag7pgsyngq0h8gcxhrb7pqrls8cxuvokhbtr0w6r3np7qimn27mogtcjx0i2wwu5axklh1b1v36rkb77ovidgyec3v1mnv8k9ak56q1ny0lluraum6qvbi3a4suf1e8srhtdp4i8541pyif7f3euu0ip843brhrflhi8thfgouqyy7nfo3swuxdxbjn7vfb8x5okozs8q83ovzw8ni5iqz3m9rehzcvkkcysb9rln69lymkbmvpd0iuv8htpa1f6l2ndjtpn4fkrwc7x8pmhowf1jhd4nlvr9gdmhk76c7ewpj44zzojsjeius1idek6gu3wmer58o6r8tbn6floxkvs33ikknyzl7hvsdfw4cih6b3m3zpyavibh9rr6lb2g8xlacdmoia37ixh8392ugc9seadfcm4khwfddhrpsxm0jdudnxjxjp1p59rzh3ova44ijel0d4j9uerfipb8e0v4hpyv10yf26l0kkkb7hcbhfwcfzn64xzgr9prusw027plocuak3d5tro1iod0hedd4covhbnafafivyqwnss12ycthx9efcxwsab18vdg93opuyffcc6vwfeyb0oz571z238241n5ubac77mrjclnzna9fsaaa5wzzdkmuid3r9id2nmrpck4jc7lkh7r8i13jz2y9e1h0umepqjzy1kexj0nunvo7jx9f2cp9n9q8ha4fqc68xcmuz40z5pux6qi5jha2awysv8n8xxl26o0o6tztfgpocwbuy6630078mgxu22dk2cl3yt6y7avpmq2oayiygx60z9xc1urg7ktgqaqmjgkkihrmh6n1zbahhhayse3ps1xl07b5mm9ss8xnjyxg1grcpkkv9y0x18yli8ijgk7e05uvoabvut98s1slbwfff5ss6k9lrfhdv05e2ex1gr3t7ppmvi0grlncutbfdwp0k9toiw7w3cdfrado6wpvcnhlwyvu68lwbk7bmsd63gzf3vtqtcxkmdpezyuszap81rrtj1tv1v7qjwxcf2npvbdci4pfonhqhxgjgcdt3uxoaqv8c13su0tjeg36rhkm9ld7uapb9gkubzdenrpwqjzcc8629htz1wxe2pbt9eqbtnoisz4qbck4xvn5fgoyst04c8k1twcu56etqavfvy4wpt3g7ihc0cvdml03365djighizdubrkzzjj1q7siex99iwnml2fydc107itbnswr1c5bb72fq5m59oj76xa6zg7vjcplow5xabg7lh4m29cd4 == \q\y\p\1\4\t\h\t\u\v\4\f\g\v\m\5\u\x\a\a\b\w\2\y\p\f\n\2\f\0\a\4\z\d\k\3\r\o\x\2\o\b\5\j\p\y\l\j\9\u\z\8\6\o\m\2\p\b\w\w\4\a\d\v\h\o\s\3\x\5\c\x\1\9\o\r\n\v\y\j\o\d\b\e\x\8\m\a\q\o\9\a\w\2\d\9\q\z\q\b\f\c\8\a\n\c\v\w\s\k\1\l\j\l\n\e\y\x\m\r\j\h\2\y\r\m\x\t\q\d\0\3\r\x\o\1\8\n\b\9\m\m\6\x\4\f\g\y\0\d\h\1\j\u\f\9\q\0\3\e\p\6\1\w\h\z\s\u\k\v\g\6\i\x\e\z\2\y\t\t\e\z\j\w\3\8\i\w\y\d\z\z\1\s\y\p\4\4\y\l\v\x\4\s\a\1\7\z\e\t\v\p\f\x\z\r\0\1\b\g\i\v\r\j\l\t\k\v\5\7\1\2\9\p\d\6\l\c\7\p\q\x\v\6\7\s\k\5\j\5\z\k\e\a\q\x\7\0\7\4\p\4\y\q\q\7\x\j\w\v\7\v\e\u\4\0\7\d\a\z\9\8\6\6\m\c\9\u\o\0\0\i\q\o\a\h\g\n\n\t\s\7\4\z\n\t\9\v\h\2\k\k\i\9\k\x\g\2\h\2\1\7\1\o\6\x\8\4\1\v\9\n\t\a\t\f\k\8\c\i\n\k\2\h\7\6\a\9\5\4\y\n\s\u\c\v\q\8\q\f\6\k\p\0\8\w\u\3\z\b\e\f\i\0\0\9\y\s\0\s\t\y\m\p\l\y\n\y\4\6\m\h\q\t\4\f\1\g\3\a\f\v\d\a\x\q\k\8\4\2\b\l\n\6\w\j\g\3\3\9\q\f\w\p\r\q\t\m\6\m\o\d\9\g\o\0\4\f\6\j\0\g\g\2\n\g\l\a\1\4\k\o\e\e\h\6\x\g\p\j\1\b\b\r\l\k\k\2\u\9\k\l\z\1\q\s\m\z\l\o\4\d\m\3\5\9\2\h\l\k\i\g\9\d\j\w\z\6\w\s\u\n\v\d\e\o\3\y\9\r\e\1\9\q\g\w\r\2\3\i\2\i\1\7\7\6\t\z\5\m\5\h\a\p\f\5\o\a\i\0\d\a\g\2\v\n\7\b\b\p\t\x\q\u\g\8\6\i\1\p\f\3\q\r\h\1\d\8\r\d\5\2\g\g\x\5\g\9\m\w\4\f\5\r\g\q\o\p\6\a\9\j\x\1\t\z\4\p\y\c\j\u\k\t\x\k\t\q\7\z\r\c\m\m\s\y\i\7\k\6\n\w\g\r\7\q\t\5\4\9\r\g\l\j\q\q\3\g\f\v\v\z\8\v\l\0\7\0\d\4\4\i\5\c\w\p\t\o\3\e\q\p\t\l\s\l\s\v\j\x\v\8\k\5\f\q\p\m\z\c\l\4\b\l\p\y\o\w\a\j\a\r\5\o\z\h\2\k\b\r\o\b\k\e\9\d\x\o\c\9\5\g\e\p\d\8\w\c\w\p\y\c\1\a\u\w\2\b\y\m\h\n\u\r\8\o\9\n\p\u\w\4\e\k\h\5\c\d\a\f\3\c\9\k\w\c\8\h\o\p\8\o\n\j\o\k\s\3\d\0\v\5\t\w\z\x\6\k\d\b\1\p\u\7\h\2\6\r\y\f\v\n\z\e\i\j\3\h\8\6\n\y\k\u\7\6\q\s\q\p\p\8\j\j\g\s\6\b\5\2\w\m\a\6\w\b\j\d\q\q\r\a\r\d\o\4\b\2\l\f\t\6\k\o\e\g\q\l\d\g\5\0\w\1\k\b\m\w\2\j\g\l\q\g\g\y\0\a\k\v\d\1\t\6\k\1\x\6\i\y\w\j\f\l\g\b\8\7\h\9\p\l\z\r\t\j\8\q\u\q\b\h\6\o\t\r\r\e\6\l\k\e\8\9\a\o\5\u\t\t\t\d\s\d\y\7\c\5\y\r\r\p\0\q\z\p\j\u\v\v\f\o\k\x\r\f\c\v\u\a\e\4\l\f\v\1\z\i\c\x\v\n\0\j
\5\5\7\a\7\m\v\o\8\x\3\j\a\o\6\x\r\6\0\d\k\t\o\l\2\h\8\7\z\4\a\i\z\5\v\4\1\v\e\j\6\g\c\x\k\k\e\d\d\h\3\b\h\b\n\c\7\0\5\t\m\7\c\l\7\w\1\6\i\g\w\k\o\l\r\n\v\d\x\2\7\s\1\7\7\q\0\7\4\t\i\u\x\q\q\u\o\t\9\c\9\3\r\j\a\0\o\l\r\e\i\3\q\l\3\d\0\3\0\z\t\z\y\h\u\x\x\o\1\t\2\e\8\q\h\u\1\b\k\a\0\c\4\9\v\o\5\5\d\a\d\s\s\f\y\b\8\7\u\m\s\l\m\8\5\l\2\0\5\s\t\b\8\p\7\k\6\r\3\h\7\b\g\p\q\j\g\n\i\v\w\f\v\c\b\7\l\p\k\j\d\k\v\3\h\d\a\n\j\1\u\a\i\y\g\t\5\i\j\s\z\h\v\p\3\z\1\z\a\6\i\p\r\s\3\e\k\b\c\2\g\e\9\x\e\f\h\3\z\y\f\b\d\i\2\k\y\n\d\0\v\j\7\3\y\c\9\g\d\o\8\7\c\f\y\g\2\e\5\6\h\5\c\x\z\p\g\y\3\u\9\o\7\l\a\r\9\2\f\z\v\x\s\y\f\i\3\u\2\r\4\m\g\i\2\2\h\g\s\x\f\1\g\p\g\v\b\c\v\m\w\n\v\z\z\v\l\8\c\e\0\1\g\9\m\0\u\0\d\v\f\c\x\3\1\w\r\f\p\2\8\w\f\6\d\e\4\0\n\u\q\f\7\u\8\n\m\d\r\o\k\w\2\d\c\u\s\q\r\v\n\r\2\n\3\3\k\q\t\y\g\e\e\k\b\x\z\s\e\j\w\6\u\k\j\9\2\p\a\w\5\i\1\k\z\b\4\l\1\3\a\o\n\4\a\e\9\g\g\0\o\s\1\7\2\8\p\z\x\4\t\3\x\g\y\a\l\l\o\m\z\h\b\y\4\y\o\9\f\x\p\0\r\r\5\n\4\b\v\r\w\t\z\x\j\g\7\i\l\n\m\9\n\g\u\p\2\7\0\h\3\4\e\u\m\z\x\c\s\d\2\g\1\x\z\e\j\l\4\q\p\6\r\7\m\v\1\x\7\r\k\p\w\z\s\i\5\g\q\w\m\t\y\b\j\f\v\l\f\c\4\g\j\p\e\w\x\5\m\t\t\z\b\2\r\w\f\i\o\7\j\3\p\5\4\9\x\c\z\6\u\q\b\3\3\3\u\r\a\2\4\8\c\7\1\d\q\7\x\w\5\8\m\4\w\9\7\9\p\t\f\6\h\4\g\n\7\5\0\7\e\p\m\m\a\r\q\s\l\0\j\q\f\8\4\o\f\m\l\b\l\2\g\e\5\h\7\g\c\c\5\8\e\2\6\9\1\5\w\h\c\z\a\s\3\j\t\7\y\5\n\p\r\4\6\l\e\q\z\d\w\6\0\n\z\l\j\8\v\4\h\l\0\0\2\5\z\u\q\8\s\t\e\b\j\8\x\5\3\u\w\8\x\6\b\v\3\1\8\j\j\2\a\i\6\n\4\e\d\o\h\2\k\b\h\6\c\k\f\7\o\e\y\w\3\v\y\b\5\u\6\6\o\1\e\i\q\q\y\t\r\n\d\q\q\k\l\d\c\o\e\e\1\t\s\3\e\4\1\s\7\1\2\j\m\p\m\5\x\s\b\t\1\f\7\c\g\1\g\0\0\h\b\l\b\s\7\j\l\t\c\d\g\0\d\g\k\7\w\n\u\6\p\8\3\n\4\d\o\3\c\h\y\a\d\c\h\e\w\9\s\w\2\k\i\z\o\0\3\g\6\a\q\d\w\5\e\n\v\f\7\9\r\x\g\p\9\j\h\s\a\5\v\p\1\2\z\p\w\h\w\z\g\o\z\8\6\d\i\j\d\t\o\0\4\5\q\x\c\e\w\m\v\2\1\9\9\j\i\4\9\y\t\8\p\y\d\r\q\r\8\n\8\y\a\m\u\n\u\z\h\m\v\5\o\9\y\g\7\s\t\r\k\l\b\6\5\b\2\a\s\c\5\j\3\b\h\m\w\k\5\c\l\9\j\4\b\2\w\t\f\a\6\l\9\l\z\r\3\0\l\3\1\n\n\c\j\x\9\x\e\g\t\7\a\6\3\9\y\d\i\u\i\l\o\0\z\c\0\g\6\5\9\u\x\f\y\b\j\4\k\l\c\u\5\6\d\h\d\f\b\v\y\q\9\0\o\c\g\z\a\6\9\3\g\o\q\x\4\0\l\n\k\y\y\d\g\h\2\e\3\d\s\d\1\r\b\w\1\v\w\n\0\v\7\k\2\9\e\t\8\k\1\4\e\s\i\6\y\i\f\o\i\x\a\4\g\a\j\s\0\q\r\l\l\i\6\7\u\p\c\p\d\v\4\8\b\k\7\s\4\2\0\j\k\t\f\3\6\j\d\u\z\c\0\6\4\e\q\j\2\r\q\n\q\j\r\3\5\a\f\3\5\d\4\t\5\c\g\4\o\g\c\9\7\j\a\v\7\3\7\h\h\b\c\c\t\v\u\z\q\f\o\j\9\9\r\u\o\5\1\6\4\f\9\6\j\t\w\n\4\h\y\h\t\5\3\3\z\4\6\7\y\u\n\s\k\t\4\a\n\7\5\i\8\t\g\2\w\c\s\1\3\7\b\u\o\x\y\u\s\f\8\y\9\m\a\8\u\x\i\4\v\r\f\p\o\n\e\s\u\u\9\i\q\3\6\9\6\6\o\l\w\l\u\q\j\n\7\k\a\v\9\p\u\4\s\b\h\j\y\p\4\m\1\w\t\a\c\8\o\2\4\6\m\7\3\9\c\a\u\c\l\p\1\3\v\9\k\p\l\s\w\1\7\3\v\x\m\1\u\0\1\x\6\k\2\p\2\h\n\j\3\5\8\0\o\n\2\h\k\h\4\e\a\f\a\w\m\y\6\i\a\w\x\4\j\u\q\e\2\4\x\c\b\7\e\o\z\x\z\w\s\c\d\r\8\j\9\h\l\u\j\1\w\b\u\b\g\6\c\6\j\q\g\0\z\b\t\3\0\5\r\z\v\i\w\g\c\7\9\x\7\a\1\9\e\g\u\g\b\4\p\l\o\3\l\3\x\x\k\z\i\g\g\i\f\5\d\2\6\a\w\f\9\h\f\5\n\8\i\5\g\9\6\h\0\6\o\b\4\x\z\o\y\1\s\z\1\h\c\6\y\c\p\k\j\l\y\5\p\x\g\d\n\z\3\u\a\d\z\i\d\0\n\8\a\w\0\l\9\b\u\e\k\2\p\y\r\l\x\c\w\p\g\5\0\l\t\h\i\e\o\l\y\4\w\t\r\6\h\7\q\u\x\m\w\s\4\o\h\w\p\5\8\l\z\w\a\d\5\w\h\b\8\f\5\x\v\s\s\p\b\c\k\j\t\7\2\6\f\i\0\5\p\6\3\n\b\r\b\j\n\t\f\d\7\4\1\x\w\r\y\v\3\b\i\o\2\n\f\0\f\s\8\y\5\y\4\8\5\f\y\n\5\k\c\k\g\s\2\f\k\9\e\5\b\g\7\1\2\3\9\t\r\x\9\9\2\c\x\c\y\0\w\f\j\b\y\l\2\6\t\r\6\i\j\6\7\b\7\h\b\e\o\x\p\7\b\7\p\s\9\b\e\m\e\p\2\f\1\d\z\1\r\6\8\8\j\c\3\h\z\n\g\w\h\m\t\3\z\d\9\m\q\6\j\r\w\m\f\c\v\r\k\h\0\f\v\w\v\j\9\6\j\m\f\u\u\4\i\9\v\q\b\1\4\x\g\x\d\g\6\l\k\z\w\0\k\3\1\e\m\v\e\o\g\4\k\v\x\9\b\c\y\c\x\
x\0\r\f\j\7\m\h\o\p\8\a\b\y\6\e\h\u\i\7\5\w\z\m\t\u\3\v\x\n\8\g\4\j\k\1\w\c\k\4\7\c\j\o\7\t\8\5\r\1\9\l\l\y\b\0\3\v\f\8\z\s\3\6\d\b\d\0\6\z\p\p\1\u\t\m\k\s\q\k\a\g\7\p\g\s\y\n\g\q\0\h\8\g\c\x\h\r\b\7\p\q\r\l\s\8\c\x\u\v\o\k\h\b\t\r\0\w\6\r\3\n\p\7\q\i\m\n\2\7\m\o\g\t\c\j\x\0\i\2\w\w\u\5\a\x\k\l\h\1\b\1\v\3\6\r\k\b\7\7\o\v\i\d\g\y\e\c\3\v\1\m\n\v\8\k\9\a\k\5\6\q\1\n\y\0\l\l\u\r\a\u\m\6\q\v\b\i\3\a\4\s\u\f\1\e\8\s\r\h\t\d\p\4\i\8\5\4\1\p\y\i\f\7\f\3\e\u\u\0\i\p\8\4\3\b\r\h\r\f\l\h\i\8\t\h\f\g\o\u\q\y\y\7\n\f\o\3\s\w\u\x\d\x\b\j\n\7\v\f\b\8\x\5\o\k\o\z\s\8\q\8\3\o\v\z\w\8\n\i\5\i\q\z\3\m\9\r\e\h\z\c\v\k\k\c\y\s\b\9\r\l\n\6\9\l\y\m\k\b\m\v\p\d\0\i\u\v\8\h\t\p\a\1\f\6\l\2\n\d\j\t\p\n\4\f\k\r\w\c\7\x\8\p\m\h\o\w\f\1\j\h\d\4\n\l\v\r\9\g\d\m\h\k\7\6\c\7\e\w\p\j\4\4\z\z\o\j\s\j\e\i\u\s\1\i\d\e\k\6\g\u\3\w\m\e\r\5\8\o\6\r\8\t\b\n\6\f\l\o\x\k\v\s\3\3\i\k\k\n\y\z\l\7\h\v\s\d\f\w\4\c\i\h\6\b\3\m\3\z\p\y\a\v\i\b\h\9\r\r\6\l\b\2\g\8\x\l\a\c\d\m\o\i\a\3\7\i\x\h\8\3\9\2\u\g\c\9\s\e\a\d\f\c\m\4\k\h\w\f\d\d\h\r\p\s\x\m\0\j\d\u\d\n\x\j\x\j\p\1\p\5\9\r\z\h\3\o\v\a\4\4\i\j\e\l\0\d\4\j\9\u\e\r\f\i\p\b\8\e\0\v\4\h\p\y\v\1\0\y\f\2\6\l\0\k\k\k\b\7\h\c\b\h\f\w\c\f\z\n\6\4\x\z\g\r\9\p\r\u\s\w\0\2\7\p\l\o\c\u\a\k\3\d\5\t\r\o\1\i\o\d\0\h\e\d\d\4\c\o\v\h\b\n\a\f\a\f\i\v\y\q\w\n\s\s\1\2\y\c\t\h\x\9\e\f\c\x\w\s\a\b\1\8\v\d\g\9\3\o\p\u\y\f\f\c\c\6\v\w\f\e\y\b\0\o\z\5\7\1\z\2\3\8\2\4\1\n\5\u\b\a\c\7\7\m\r\j\c\l\n\z\n\a\9\f\s\a\a\a\5\w\z\z\d\k\m\u\i\d\3\r\9\i\d\2\n\m\r\p\c\k\4\j\c\7\l\k\h\7\r\8\i\1\3\j\z\2\y\9\e\1\h\0\u\m\e\p\q\j\z\y\1\k\e\x\j\0\n\u\n\v\o\7\j\x\9\f\2\c\p\9\n\9\q\8\h\a\4\f\q\c\6\8\x\c\m\u\z\4\0\z\5\p\u\x\6\q\i\5\j\h\a\2\a\w\y\s\v\8\n\8\x\x\l\2\6\o\0\o\6\t\z\t\f\g\p\o\c\w\b\u\y\6\6\3\0\0\7\8\m\g\x\u\2\2\d\k\2\c\l\3\y\t\6\y\7\a\v\p\m\q\2\o\a\y\i\y\g\x\6\0\z\9\x\c\1\u\r\g\7\k\t\g\q\a\q\m\j\g\k\k\i\h\r\m\h\6\n\1\z\b\a\h\h\h\a\y\s\e\3\p\s\1\x\l\0\7\b\5\m\m\9\s\s\8\x\n\j\y\x\g\1\g\r\c\p\k\k\v\9\y\0\x\1\8\y\l\i\8\i\j\g\k\7\e\0\5\u\v\o\a\b\v\u\t\9\8\s\1\s\l\b\w\f\f\f\5\s\s\6\k\9\l\r\f\h\d\v\0\5\e\2\e\x\1\g\r\3\t\7\p\p\m\v\i\0\g\r\l\n\c\u\t\b\f\d\w\p\0\k\9\t\o\i\w\7\w\3\c\d\f\r\a\d\o\6\w\p\v\c\n\h\l\w\y\v\u\6\8\l\w\b\k\7\b\m\s\d\6\3\g\z\f\3\v\t\q\t\c\x\k\m\d\p\e\z\y\u\s\z\a\p\8\1\r\r\t\j\1\t\v\1\v\7\q\j\w\x\c\f\2\n\p\v\b\d\c\i\4\p\f\o\n\h\q\h\x\g\j\g\c\d\t\3\u\x\o\a\q\v\8\c\1\3\s\u\0\t\j\e\g\3\6\r\h\k\m\9\l\d\7\u\a\p\b\9\g\k\u\b\z\d\e\n\r\p\w\q\j\z\c\c\8\6\2\9\h\t\z\1\w\x\e\2\p\b\t\9\e\q\b\t\n\o\i\s\z\4\q\b\c\k\4\x\v\n\5\f\g\o\y\s\t\0\4\c\8\k\1\t\w\c\u\5\6\e\t\q\a\v\f\v\y\4\w\p\t\3\g\7\i\h\c\0\c\v\d\m\l\0\3\3\6\5\d\j\i\g\h\i\z\d\u\b\r\k\z\z\j\j\1\q\7\s\i\e\x\9\9\i\w\n\m\l\2\f\y\d\c\1\0\7\i\t\b\n\s\w\r\1\c\5\b\b\7\2\f\q\5\m\5\9\o\j\7\6\x\a\6\z\g\7\v\j\c\p\l\o\w\5\x\a\b\g\7\l\h\4\m\2\9\c\d\4 ]] 00:07:34.647 00:07:34.647 real 0m1.240s 00:07:34.647 user 0m0.826s 00:07:34.647 sys 0m0.586s 00:07:34.647 09:44:59 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:34.647 09:44:59 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:34.647 09:44:59 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:07:34.647 09:44:59 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:07:34.647 09:44:59 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:34.647 09:44:59 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:34.647 09:44:59 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:07:34.647 09:44:59 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 
00:07:34.647 09:44:59 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:07:34.647 09:44:59 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:34.647 09:44:59 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:07:34.647 09:44:59 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:34.647 09:44:59 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:34.647 { 00:07:34.647 "subsystems": [ 00:07:34.647 { 00:07:34.647 "subsystem": "bdev", 00:07:34.647 "config": [ 00:07:34.647 { 00:07:34.647 "params": { 00:07:34.647 "trtype": "pcie", 00:07:34.647 "traddr": "0000:00:10.0", 00:07:34.647 "name": "Nvme0" 00:07:34.647 }, 00:07:34.647 "method": "bdev_nvme_attach_controller" 00:07:34.647 }, 00:07:34.647 { 00:07:34.647 "method": "bdev_wait_for_examine" 00:07:34.647 } 00:07:34.647 ] 00:07:34.647 } 00:07:34.647 ] 00:07:34.647 } 00:07:34.647 [2024-12-06 09:44:59.886618] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:07:34.647 [2024-12-06 09:44:59.886735] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59965 ] 00:07:34.906 [2024-12-06 09:45:00.034363] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.906 [2024-12-06 09:45:00.073664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.906 [2024-12-06 09:45:00.126171] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:35.166  [2024-12-06T09:45:00.438Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:07:35.166 00:07:35.166 09:45:00 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:35.166 00:07:35.166 real 0m16.899s 00:07:35.166 user 0m11.923s 00:07:35.166 sys 0m6.578s 00:07:35.166 09:45:00 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:35.166 ************************************ 00:07:35.166 END TEST spdk_dd_basic_rw 00:07:35.166 ************************************ 00:07:35.167 09:45:00 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:35.427 09:45:00 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:07:35.427 09:45:00 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:35.427 09:45:00 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:35.427 09:45:00 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:35.427 ************************************ 00:07:35.427 START TEST spdk_dd_posix 00:07:35.427 ************************************ 00:07:35.427 09:45:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:07:35.427 * Looking for test storage... 
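The cleanup traced just above (dd/basic_rw.sh cleanup -> clear_nvme Nvme0n1) zero-fills the start of the NVMe bdev by invoking spdk_dd with the bdev configuration handed over as JSON on a spare file descriptor (/dev/fd/62 in the log). A minimal stand-alone sketch of that invocation, assuming the binary path, PCI address and sizes shown above; the process-substitution plumbing is an illustration, not the test's own gen_conf helper:

# Sketch only: zero-fill 1 MiB of the Nvme0n1 bdev via spdk_dd, feeding the
# bdev config as JSON on /dev/fd/NN through process substitution.
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
"$SPDK_DD" --if=/dev/zero --bs=1048576 --count=1 --ob=Nvme0n1 --json <(
  cat <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": { "trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0" },
          "method": "bdev_nvme_attach_controller"
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
JSON
)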
00:07:35.427 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:35.427 09:45:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:35.427 09:45:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1711 -- # lcov --version 00:07:35.427 09:45:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:35.427 09:45:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:35.427 09:45:00 spdk_dd.spdk_dd_posix -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:35.427 09:45:00 spdk_dd.spdk_dd_posix -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:35.427 09:45:00 spdk_dd.spdk_dd_posix -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:35.427 09:45:00 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # IFS=.-: 00:07:35.427 09:45:00 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # read -ra ver1 00:07:35.427 09:45:00 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # IFS=.-: 00:07:35.427 09:45:00 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # read -ra ver2 00:07:35.427 09:45:00 spdk_dd.spdk_dd_posix -- scripts/common.sh@338 -- # local 'op=<' 00:07:35.427 09:45:00 spdk_dd.spdk_dd_posix -- scripts/common.sh@340 -- # ver1_l=2 00:07:35.427 09:45:00 spdk_dd.spdk_dd_posix -- scripts/common.sh@341 -- # ver2_l=1 00:07:35.427 09:45:00 spdk_dd.spdk_dd_posix -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:35.427 09:45:00 spdk_dd.spdk_dd_posix -- scripts/common.sh@344 -- # case "$op" in 00:07:35.427 09:45:00 spdk_dd.spdk_dd_posix -- scripts/common.sh@345 -- # : 1 00:07:35.427 09:45:00 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:35.427 09:45:00 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:35.427 09:45:00 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # decimal 1 00:07:35.427 09:45:00 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=1 00:07:35.427 09:45:00 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:35.427 09:45:00 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 1 00:07:35.427 09:45:00 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # ver1[v]=1 00:07:35.427 09:45:00 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # decimal 2 00:07:35.427 09:45:00 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=2 00:07:35.427 09:45:00 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:35.427 09:45:00 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 2 00:07:35.427 09:45:00 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # ver2[v]=2 00:07:35.427 09:45:00 spdk_dd.spdk_dd_posix -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:35.427 09:45:00 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:35.427 09:45:00 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # return 0 00:07:35.427 09:45:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:35.427 09:45:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:35.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.427 --rc genhtml_branch_coverage=1 00:07:35.427 --rc genhtml_function_coverage=1 00:07:35.427 --rc genhtml_legend=1 00:07:35.427 --rc geninfo_all_blocks=1 00:07:35.427 --rc geninfo_unexecuted_blocks=1 00:07:35.427 00:07:35.427 ' 00:07:35.427 09:45:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:35.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.427 --rc genhtml_branch_coverage=1 00:07:35.427 --rc genhtml_function_coverage=1 00:07:35.427 --rc genhtml_legend=1 00:07:35.427 --rc geninfo_all_blocks=1 00:07:35.427 --rc geninfo_unexecuted_blocks=1 00:07:35.427 00:07:35.427 ' 00:07:35.427 09:45:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:35.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.427 --rc genhtml_branch_coverage=1 00:07:35.427 --rc genhtml_function_coverage=1 00:07:35.427 --rc genhtml_legend=1 00:07:35.427 --rc geninfo_all_blocks=1 00:07:35.427 --rc geninfo_unexecuted_blocks=1 00:07:35.427 00:07:35.427 ' 00:07:35.427 09:45:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:35.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.427 --rc genhtml_branch_coverage=1 00:07:35.427 --rc genhtml_function_coverage=1 00:07:35.427 --rc genhtml_legend=1 00:07:35.427 --rc geninfo_all_blocks=1 00:07:35.427 --rc geninfo_unexecuted_blocks=1 00:07:35.427 00:07:35.427 ' 00:07:35.427 09:45:00 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:35.427 09:45:00 spdk_dd.spdk_dd_posix -- scripts/common.sh@15 -- # shopt -s extglob 00:07:35.427 09:45:00 spdk_dd.spdk_dd_posix -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:35.427 09:45:00 spdk_dd.spdk_dd_posix -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:35.427 09:45:00 spdk_dd.spdk_dd_posix -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:35.427 09:45:00 spdk_dd.spdk_dd_posix -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.427 09:45:00 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.427 09:45:00 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.427 09:45:00 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:07:35.427 09:45:00 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.428 09:45:00 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:07:35.428 09:45:00 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:07:35.428 09:45:00 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:07:35.428 09:45:00 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:07:35.428 09:45:00 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:35.428 09:45:00 spdk_dd.spdk_dd_posix -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:35.428 09:45:00 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:07:35.428 09:45:00 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:07:35.428 * First test run, liburing in use 00:07:35.428 09:45:00 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:07:35.428 09:45:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:35.428 09:45:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:07:35.428 09:45:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:35.687 ************************************ 00:07:35.687 START TEST dd_flag_append 00:07:35.687 ************************************ 00:07:35.687 09:45:00 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1129 -- # append 00:07:35.687 09:45:00 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:07:35.687 09:45:00 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:07:35.687 09:45:00 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:07:35.687 09:45:00 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:07:35.687 09:45:00 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:07:35.687 09:45:00 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=9w7p4uu977dsi2ovgazmjkq3vtjd1dww 00:07:35.687 09:45:00 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:07:35.687 09:45:00 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:07:35.687 09:45:00 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:07:35.687 09:45:00 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=s180ktxk7am69al68vqfj1kzma5ih403 00:07:35.687 09:45:00 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s 9w7p4uu977dsi2ovgazmjkq3vtjd1dww 00:07:35.687 09:45:00 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s s180ktxk7am69al68vqfj1kzma5ih403 00:07:35.687 09:45:00 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:07:35.687 [2024-12-06 09:45:00.754459] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
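The dd_flag_append case above seeds dd.dump0 and dd.dump1 with one 32-character random string each, then copies dump0 onto dump1 with --oflag=append; the pattern match that follows passes only if dump1 ends up holding its original string with dump0's string appended. A rough shell equivalent of that check, reusing the two strings visible in the log (the temporary file handling is illustrative, not the test script itself):

# Illustration of the append-flag check: the output file must keep its old
# contents and gain the input's contents at the end.
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
dump0=$(mktemp) dump1=$(mktemp)
printf %s 9w7p4uu977dsi2ovgazmjkq3vtjd1dww > "$dump0"
printf %s s180ktxk7am69al68vqfj1kzma5ih403 > "$dump1"
"$SPDK_DD" --if="$dump0" --of="$dump1" --oflag=append
[[ $(<"$dump1") == s180ktxk7am69al68vqfj1kzma5ih4039w7p4uu977dsi2ovgazmjkq3vtjd1dww ]] \
  && echo 'append kept the existing bytes and added the new ones'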
00:07:35.687 [2024-12-06 09:45:00.754681] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60031 ] 00:07:35.687 [2024-12-06 09:45:00.891970] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.687 [2024-12-06 09:45:00.932345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.945 [2024-12-06 09:45:00.984534] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:35.946  [2024-12-06T09:45:01.218Z] Copying: 32/32 [B] (average 31 kBps) 00:07:35.946 00:07:35.946 09:45:01 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ s180ktxk7am69al68vqfj1kzma5ih4039w7p4uu977dsi2ovgazmjkq3vtjd1dww == \s\1\8\0\k\t\x\k\7\a\m\6\9\a\l\6\8\v\q\f\j\1\k\z\m\a\5\i\h\4\0\3\9\w\7\p\4\u\u\9\7\7\d\s\i\2\o\v\g\a\z\m\j\k\q\3\v\t\j\d\1\d\w\w ]] 00:07:35.946 00:07:35.946 real 0m0.487s 00:07:35.946 user 0m0.228s 00:07:35.946 sys 0m0.273s 00:07:35.946 09:45:01 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:35.946 ************************************ 00:07:35.946 END TEST dd_flag_append 00:07:35.946 ************************************ 00:07:35.946 09:45:01 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:07:36.204 09:45:01 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:07:36.204 09:45:01 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:36.204 09:45:01 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:36.204 09:45:01 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:36.204 ************************************ 00:07:36.204 START TEST dd_flag_directory 00:07:36.204 ************************************ 00:07:36.204 09:45:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1129 -- # directory 00:07:36.204 09:45:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:36.204 09:45:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # local es=0 00:07:36.204 09:45:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:36.204 09:45:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:36.204 09:45:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:36.204 09:45:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:36.204 09:45:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:36.204 09:45:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:36.204 09:45:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:36.204 09:45:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:36.204 09:45:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:36.204 09:45:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:36.204 [2024-12-06 09:45:01.311980] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:07:36.204 [2024-12-06 09:45:01.312108] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60060 ] 00:07:36.204 [2024-12-06 09:45:01.455409] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.464 [2024-12-06 09:45:01.504515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.464 [2024-12-06 09:45:01.556051] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:36.464 [2024-12-06 09:45:01.589424] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:36.464 [2024-12-06 09:45:01.589480] spdk_dd.c:1081:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:36.464 [2024-12-06 09:45:01.589509] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:36.464 [2024-12-06 09:45:01.713862] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:07:36.723 09:45:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # es=236 00:07:36.723 09:45:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:36.723 09:45:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@664 -- # es=108 00:07:36.723 09:45:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@665 -- # case "$es" in 00:07:36.723 09:45:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@672 -- # es=1 00:07:36.723 09:45:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:36.723 09:45:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:36.723 09:45:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # local es=0 00:07:36.723 09:45:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:36.724 09:45:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:36.724 09:45:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:36.724 09:45:01 
spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:36.724 09:45:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:36.724 09:45:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:36.724 09:45:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:36.724 09:45:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:36.724 09:45:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:36.724 09:45:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:36.724 [2024-12-06 09:45:01.848802] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:07:36.724 [2024-12-06 09:45:01.849099] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60069 ] 00:07:36.983 [2024-12-06 09:45:01.997711] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.983 [2024-12-06 09:45:02.047119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.983 [2024-12-06 09:45:02.103622] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:36.983 [2024-12-06 09:45:02.141760] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:36.983 [2024-12-06 09:45:02.141828] spdk_dd.c:1130:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:36.983 [2024-12-06 09:45:02.141855] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:37.243 [2024-12-06 09:45:02.258684] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:07:37.243 09:45:02 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # es=236 00:07:37.243 09:45:02 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:37.243 09:45:02 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@664 -- # es=108 00:07:37.243 09:45:02 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@665 -- # case "$es" in 00:07:37.243 09:45:02 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@672 -- # es=1 00:07:37.243 09:45:02 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:37.243 00:07:37.243 real 0m1.067s 00:07:37.243 user 0m0.578s 00:07:37.243 sys 0m0.276s 00:07:37.243 09:45:02 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:37.243 09:45:02 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:07:37.243 ************************************ 00:07:37.243 END TEST dd_flag_directory 00:07:37.243 ************************************ 00:07:37.243 09:45:02 
spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:07:37.243 09:45:02 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:37.243 09:45:02 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:37.243 09:45:02 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:37.243 ************************************ 00:07:37.243 START TEST dd_flag_nofollow 00:07:37.243 ************************************ 00:07:37.243 09:45:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1129 -- # nofollow 00:07:37.243 09:45:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:37.243 09:45:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:37.243 09:45:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:37.243 09:45:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:37.243 09:45:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:37.243 09:45:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # local es=0 00:07:37.243 09:45:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:37.243 09:45:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:37.243 09:45:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:37.243 09:45:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:37.243 09:45:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:37.243 09:45:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:37.243 09:45:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:37.243 09:45:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:37.243 09:45:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:37.243 09:45:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:37.243 [2024-12-06 09:45:02.430930] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
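The directory and nofollow cases are negative tests: spdk_dd is expected to refuse the copy ("Not a directory" above, "Too many levels of symbolic links" below), and the NOT wrapper together with the exit-status juggling (es=236 -> es=1) folds that expected failure into a pass. A generic sketch of the same expect-failure pattern, assuming a hypothetical helper name (the real wrapper lives in autotest_common.sh):

# Sketch of an expect-failure wrapper: the check passes only when the wrapped
# command exits non-zero, mirroring how NOT is used around spdk_dd above.
expect_failure() {
  if "$@"; then
    echo "FAIL: '$*' succeeded but was expected to fail" >&2
    return 1
  fi
  return 0
}
# e.g. reading a plain file with --iflag=directory must be rejected:
expect_failure /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
  --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory \
  --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0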
00:07:37.243 [2024-12-06 09:45:02.431173] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60098 ] 00:07:37.503 [2024-12-06 09:45:02.575559] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.503 [2024-12-06 09:45:02.622116] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.503 [2024-12-06 09:45:02.679065] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:37.503 [2024-12-06 09:45:02.714926] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:37.503 [2024-12-06 09:45:02.714985] spdk_dd.c:1081:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:37.503 [2024-12-06 09:45:02.715006] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:37.763 [2024-12-06 09:45:02.834432] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:07:37.763 09:45:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # es=216 00:07:37.763 09:45:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:37.763 09:45:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@664 -- # es=88 00:07:37.763 09:45:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@665 -- # case "$es" in 00:07:37.763 09:45:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@672 -- # es=1 00:07:37.763 09:45:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:37.763 09:45:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:37.763 09:45:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # local es=0 00:07:37.763 09:45:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:37.763 09:45:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:37.763 09:45:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:37.763 09:45:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:37.763 09:45:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:37.763 09:45:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:37.763 09:45:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:37.763 09:45:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:37.763 09:45:02 
spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:37.763 09:45:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:37.763 [2024-12-06 09:45:02.959696] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:07:37.763 [2024-12-06 09:45:02.959801] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60113 ] 00:07:38.022 [2024-12-06 09:45:03.104066] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.022 [2024-12-06 09:45:03.143445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.022 [2024-12-06 09:45:03.193972] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:38.022 [2024-12-06 09:45:03.229130] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:38.022 [2024-12-06 09:45:03.229188] spdk_dd.c:1130:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:38.022 [2024-12-06 09:45:03.229208] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:38.282 [2024-12-06 09:45:03.342120] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:07:38.282 09:45:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # es=216 00:07:38.282 09:45:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:38.282 09:45:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@664 -- # es=88 00:07:38.282 09:45:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@665 -- # case "$es" in 00:07:38.282 09:45:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@672 -- # es=1 00:07:38.282 09:45:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:38.282 09:45:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:07:38.282 09:45:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:07:38.282 09:45:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:07:38.282 09:45:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:38.282 [2024-12-06 09:45:03.464393] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:07:38.282 [2024-12-06 09:45:03.464503] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60115 ] 00:07:38.541 [2024-12-06 09:45:03.607881] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.541 [2024-12-06 09:45:03.644270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.541 [2024-12-06 09:45:03.699841] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:38.541  [2024-12-06T09:45:04.073Z] Copying: 512/512 [B] (average 500 kBps) 00:07:38.801 00:07:38.801 ************************************ 00:07:38.801 END TEST dd_flag_nofollow 00:07:38.801 ************************************ 00:07:38.801 09:45:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ 7q4qmc5pro2p4xbd0xxc7wb593k5btlz6c02oajg8b251o2akyjl8r81gik9565yfpqkxw2qtd2li15wa3i1dqfg8nqd0eyhzm8vil5pszhl8gyknltzim9vqhnp0xba1dghs7f8z2t3fa9jckailnq8y0pgspdukjx1kj2dyabf6zxzxre73v2bjyrpwla82wu9hf0c75y3or6mjbqn1s937a6l6w8bbahw5qfdvgrsve7rtl0vc9lghj7195ezipmtps23c7915gdnzlrky1orvgd5v1b5snr8a0yzakuprzru9qy87fx2mdpzafhc0k8zizow0dpua1uh2vwk7w3v5r7vkh7fsm2zpugwwk3zmwvjyx10mls7j2gujm3n1knnvtfcf9nh6vtvr4ajshvdtjk1jy4ls4qhq6vqepm1pafgkv1azse4wkargo6pgzf9l3e9csrhjrny361hnog4x7bdskp34t42ou9eaqp43282y0bskb8tsiutv86k == \7\q\4\q\m\c\5\p\r\o\2\p\4\x\b\d\0\x\x\c\7\w\b\5\9\3\k\5\b\t\l\z\6\c\0\2\o\a\j\g\8\b\2\5\1\o\2\a\k\y\j\l\8\r\8\1\g\i\k\9\5\6\5\y\f\p\q\k\x\w\2\q\t\d\2\l\i\1\5\w\a\3\i\1\d\q\f\g\8\n\q\d\0\e\y\h\z\m\8\v\i\l\5\p\s\z\h\l\8\g\y\k\n\l\t\z\i\m\9\v\q\h\n\p\0\x\b\a\1\d\g\h\s\7\f\8\z\2\t\3\f\a\9\j\c\k\a\i\l\n\q\8\y\0\p\g\s\p\d\u\k\j\x\1\k\j\2\d\y\a\b\f\6\z\x\z\x\r\e\7\3\v\2\b\j\y\r\p\w\l\a\8\2\w\u\9\h\f\0\c\7\5\y\3\o\r\6\m\j\b\q\n\1\s\9\3\7\a\6\l\6\w\8\b\b\a\h\w\5\q\f\d\v\g\r\s\v\e\7\r\t\l\0\v\c\9\l\g\h\j\7\1\9\5\e\z\i\p\m\t\p\s\2\3\c\7\9\1\5\g\d\n\z\l\r\k\y\1\o\r\v\g\d\5\v\1\b\5\s\n\r\8\a\0\y\z\a\k\u\p\r\z\r\u\9\q\y\8\7\f\x\2\m\d\p\z\a\f\h\c\0\k\8\z\i\z\o\w\0\d\p\u\a\1\u\h\2\v\w\k\7\w\3\v\5\r\7\v\k\h\7\f\s\m\2\z\p\u\g\w\w\k\3\z\m\w\v\j\y\x\1\0\m\l\s\7\j\2\g\u\j\m\3\n\1\k\n\n\v\t\f\c\f\9\n\h\6\v\t\v\r\4\a\j\s\h\v\d\t\j\k\1\j\y\4\l\s\4\q\h\q\6\v\q\e\p\m\1\p\a\f\g\k\v\1\a\z\s\e\4\w\k\a\r\g\o\6\p\g\z\f\9\l\3\e\9\c\s\r\h\j\r\n\y\3\6\1\h\n\o\g\4\x\7\b\d\s\k\p\3\4\t\4\2\o\u\9\e\a\q\p\4\3\2\8\2\y\0\b\s\k\b\8\t\s\i\u\t\v\8\6\k ]] 00:07:38.801 00:07:38.801 real 0m1.546s 00:07:38.801 user 0m0.802s 00:07:38.801 sys 0m0.561s 00:07:38.801 09:45:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:38.801 09:45:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:07:38.801 09:45:03 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:07:38.801 09:45:03 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:38.801 09:45:03 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:38.801 09:45:03 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:38.801 ************************************ 00:07:38.801 START TEST dd_flag_noatime 00:07:38.801 ************************************ 00:07:38.801 09:45:03 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1129 -- # noatime 00:07:38.801 09:45:03 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local 
atime_if 00:07:38.801 09:45:03 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:07:38.801 09:45:03 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:07:38.801 09:45:03 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:07:38.801 09:45:03 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:07:38.801 09:45:03 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:38.801 09:45:03 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1733478303 00:07:38.801 09:45:03 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:38.801 09:45:03 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1733478303 00:07:38.801 09:45:03 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:07:39.738 09:45:04 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:39.996 [2024-12-06 09:45:05.049837] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:07:39.996 [2024-12-06 09:45:05.049954] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60163 ] 00:07:39.996 [2024-12-06 09:45:05.202244] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.996 [2024-12-06 09:45:05.254879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.255 [2024-12-06 09:45:05.312400] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:40.255  [2024-12-06T09:45:05.527Z] Copying: 512/512 [B] (average 500 kBps) 00:07:40.255 00:07:40.513 09:45:05 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:40.514 09:45:05 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1733478303 )) 00:07:40.514 09:45:05 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:40.514 09:45:05 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1733478303 )) 00:07:40.514 09:45:05 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:40.514 [2024-12-06 09:45:05.579993] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:07:40.514 [2024-12-06 09:45:05.580058] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60171 ] 00:07:40.514 [2024-12-06 09:45:05.723317] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.514 [2024-12-06 09:45:05.771268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.772 [2024-12-06 09:45:05.825043] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:40.772  [2024-12-06T09:45:06.044Z] Copying: 512/512 [B] (average 500 kBps) 00:07:40.772 00:07:41.030 09:45:06 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:41.030 ************************************ 00:07:41.030 END TEST dd_flag_noatime 00:07:41.030 ************************************ 00:07:41.031 09:45:06 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1733478305 )) 00:07:41.031 00:07:41.031 real 0m2.080s 00:07:41.031 user 0m0.563s 00:07:41.031 sys 0m0.564s 00:07:41.031 09:45:06 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:41.031 09:45:06 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:07:41.031 09:45:06 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:07:41.031 09:45:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:41.031 09:45:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:41.031 09:45:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:41.031 ************************************ 00:07:41.031 START TEST dd_flags_misc 00:07:41.031 ************************************ 00:07:41.031 09:45:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1129 -- # io 00:07:41.031 09:45:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:07:41.031 09:45:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:07:41.031 09:45:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:07:41.031 09:45:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:41.031 09:45:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:07:41.031 09:45:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:07:41.031 09:45:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:07:41.031 09:45:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:41.031 09:45:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:41.031 [2024-12-06 09:45:06.162465] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
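The noatime case above records the %X (access time) of both dump files, sleeps one second, copies with --iflag=noatime and requires the source atime to be unchanged, then repeats the copy without the flag and requires it to have advanced. A condensed sketch of that sequence, with the file paths taken from the log; this is a paraphrase of the traced dd/posix.sh steps, not a copy of them:

# Sketch: --iflag=noatime should leave the source access time untouched,
# while a normal read is allowed to advance it (hence the sleep).
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
src=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
dst=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
atime_before=$(stat --printf=%X "$src")
sleep 1
"$SPDK_DD" --if="$src" --iflag=noatime --of="$dst"
(( $(stat --printf=%X "$src") == atime_before )) || echo 'noatime did not preserve atime' >&2
"$SPDK_DD" --if="$src" --of="$dst"
# mirrors the (( atime_if < ... )) check traced above
(( $(stat --printf=%X "$src") > atime_before )) || echo 'plain copy did not advance atime' >&2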
00:07:41.031 [2024-12-06 09:45:06.162750] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60205 ] 00:07:41.290 [2024-12-06 09:45:06.308099] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.290 [2024-12-06 09:45:06.352458] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.290 [2024-12-06 09:45:06.404869] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:41.290  [2024-12-06T09:45:06.822Z] Copying: 512/512 [B] (average 500 kBps) 00:07:41.550 00:07:41.550 09:45:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 4t4uyuhm5k5959ti7thtx8lnwbfi3j5oh0ws101w6k232m72b6lwnbtiv5nuifbeyq1nodx4owzkuzptepiye92e5rac2bt66ceknf6l3nwu7f5ini6i0zl222zsljla5mab512bhpe8f9lnqxhd4lqemdnthwzlt774kx5a4ujpavxydxlr7rcbf4z0gr2f9ct0qu9cv5agyzirttcbcr4vgadgcckjg9kyntre69bfrxgfkmo9wuf5bjzekm0956fb1y7t1ys9c2u7ymqds0qib86ts3cxj35gxtjtc0pkhhvi5nkgmdpuwfjqnmp8wrhoo9spttvyvqblenu8oll061yr1l1bj5ovof2kvfe7z9tp24lwdiu6dkhawzq3beltwar5sj0ho0nb31pm9jcmwqqctkgcb1ijd9skvn8d1doam50r5r1pyl2etgtucyfikvbckxbvmbdz792ibbm356bp96snxzzuhb6q6cptkvr44nw5jipmgwelm638 == \4\t\4\u\y\u\h\m\5\k\5\9\5\9\t\i\7\t\h\t\x\8\l\n\w\b\f\i\3\j\5\o\h\0\w\s\1\0\1\w\6\k\2\3\2\m\7\2\b\6\l\w\n\b\t\i\v\5\n\u\i\f\b\e\y\q\1\n\o\d\x\4\o\w\z\k\u\z\p\t\e\p\i\y\e\9\2\e\5\r\a\c\2\b\t\6\6\c\e\k\n\f\6\l\3\n\w\u\7\f\5\i\n\i\6\i\0\z\l\2\2\2\z\s\l\j\l\a\5\m\a\b\5\1\2\b\h\p\e\8\f\9\l\n\q\x\h\d\4\l\q\e\m\d\n\t\h\w\z\l\t\7\7\4\k\x\5\a\4\u\j\p\a\v\x\y\d\x\l\r\7\r\c\b\f\4\z\0\g\r\2\f\9\c\t\0\q\u\9\c\v\5\a\g\y\z\i\r\t\t\c\b\c\r\4\v\g\a\d\g\c\c\k\j\g\9\k\y\n\t\r\e\6\9\b\f\r\x\g\f\k\m\o\9\w\u\f\5\b\j\z\e\k\m\0\9\5\6\f\b\1\y\7\t\1\y\s\9\c\2\u\7\y\m\q\d\s\0\q\i\b\8\6\t\s\3\c\x\j\3\5\g\x\t\j\t\c\0\p\k\h\h\v\i\5\n\k\g\m\d\p\u\w\f\j\q\n\m\p\8\w\r\h\o\o\9\s\p\t\t\v\y\v\q\b\l\e\n\u\8\o\l\l\0\6\1\y\r\1\l\1\b\j\5\o\v\o\f\2\k\v\f\e\7\z\9\t\p\2\4\l\w\d\i\u\6\d\k\h\a\w\z\q\3\b\e\l\t\w\a\r\5\s\j\0\h\o\0\n\b\3\1\p\m\9\j\c\m\w\q\q\c\t\k\g\c\b\1\i\j\d\9\s\k\v\n\8\d\1\d\o\a\m\5\0\r\5\r\1\p\y\l\2\e\t\g\t\u\c\y\f\i\k\v\b\c\k\x\b\v\m\b\d\z\7\9\2\i\b\b\m\3\5\6\b\p\9\6\s\n\x\z\z\u\h\b\6\q\6\c\p\t\k\v\r\4\4\n\w\5\j\i\p\m\g\w\e\l\m\6\3\8 ]] 00:07:41.550 09:45:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:41.550 09:45:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:41.550 [2024-12-06 09:45:06.703623] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:07:41.550 [2024-12-06 09:45:06.703791] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60209 ] 00:07:41.810 [2024-12-06 09:45:06.859074] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.810 [2024-12-06 09:45:06.902994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.810 [2024-12-06 09:45:06.959723] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:41.810  [2024-12-06T09:45:07.341Z] Copying: 512/512 [B] (average 500 kBps) 00:07:42.069 00:07:42.069 09:45:07 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 4t4uyuhm5k5959ti7thtx8lnwbfi3j5oh0ws101w6k232m72b6lwnbtiv5nuifbeyq1nodx4owzkuzptepiye92e5rac2bt66ceknf6l3nwu7f5ini6i0zl222zsljla5mab512bhpe8f9lnqxhd4lqemdnthwzlt774kx5a4ujpavxydxlr7rcbf4z0gr2f9ct0qu9cv5agyzirttcbcr4vgadgcckjg9kyntre69bfrxgfkmo9wuf5bjzekm0956fb1y7t1ys9c2u7ymqds0qib86ts3cxj35gxtjtc0pkhhvi5nkgmdpuwfjqnmp8wrhoo9spttvyvqblenu8oll061yr1l1bj5ovof2kvfe7z9tp24lwdiu6dkhawzq3beltwar5sj0ho0nb31pm9jcmwqqctkgcb1ijd9skvn8d1doam50r5r1pyl2etgtucyfikvbckxbvmbdz792ibbm356bp96snxzzuhb6q6cptkvr44nw5jipmgwelm638 == \4\t\4\u\y\u\h\m\5\k\5\9\5\9\t\i\7\t\h\t\x\8\l\n\w\b\f\i\3\j\5\o\h\0\w\s\1\0\1\w\6\k\2\3\2\m\7\2\b\6\l\w\n\b\t\i\v\5\n\u\i\f\b\e\y\q\1\n\o\d\x\4\o\w\z\k\u\z\p\t\e\p\i\y\e\9\2\e\5\r\a\c\2\b\t\6\6\c\e\k\n\f\6\l\3\n\w\u\7\f\5\i\n\i\6\i\0\z\l\2\2\2\z\s\l\j\l\a\5\m\a\b\5\1\2\b\h\p\e\8\f\9\l\n\q\x\h\d\4\l\q\e\m\d\n\t\h\w\z\l\t\7\7\4\k\x\5\a\4\u\j\p\a\v\x\y\d\x\l\r\7\r\c\b\f\4\z\0\g\r\2\f\9\c\t\0\q\u\9\c\v\5\a\g\y\z\i\r\t\t\c\b\c\r\4\v\g\a\d\g\c\c\k\j\g\9\k\y\n\t\r\e\6\9\b\f\r\x\g\f\k\m\o\9\w\u\f\5\b\j\z\e\k\m\0\9\5\6\f\b\1\y\7\t\1\y\s\9\c\2\u\7\y\m\q\d\s\0\q\i\b\8\6\t\s\3\c\x\j\3\5\g\x\t\j\t\c\0\p\k\h\h\v\i\5\n\k\g\m\d\p\u\w\f\j\q\n\m\p\8\w\r\h\o\o\9\s\p\t\t\v\y\v\q\b\l\e\n\u\8\o\l\l\0\6\1\y\r\1\l\1\b\j\5\o\v\o\f\2\k\v\f\e\7\z\9\t\p\2\4\l\w\d\i\u\6\d\k\h\a\w\z\q\3\b\e\l\t\w\a\r\5\s\j\0\h\o\0\n\b\3\1\p\m\9\j\c\m\w\q\q\c\t\k\g\c\b\1\i\j\d\9\s\k\v\n\8\d\1\d\o\a\m\5\0\r\5\r\1\p\y\l\2\e\t\g\t\u\c\y\f\i\k\v\b\c\k\x\b\v\m\b\d\z\7\9\2\i\b\b\m\3\5\6\b\p\9\6\s\n\x\z\z\u\h\b\6\q\6\c\p\t\k\v\r\4\4\n\w\5\j\i\p\m\g\w\e\l\m\6\3\8 ]] 00:07:42.069 09:45:07 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:42.069 09:45:07 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:42.069 [2024-12-06 09:45:07.244431] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:07:42.069 [2024-12-06 09:45:07.244561] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60224 ] 00:07:42.329 [2024-12-06 09:45:07.388121] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.329 [2024-12-06 09:45:07.429394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.329 [2024-12-06 09:45:07.482864] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:42.329  [2024-12-06T09:45:07.860Z] Copying: 512/512 [B] (average 250 kBps) 00:07:42.588 00:07:42.588 09:45:07 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 4t4uyuhm5k5959ti7thtx8lnwbfi3j5oh0ws101w6k232m72b6lwnbtiv5nuifbeyq1nodx4owzkuzptepiye92e5rac2bt66ceknf6l3nwu7f5ini6i0zl222zsljla5mab512bhpe8f9lnqxhd4lqemdnthwzlt774kx5a4ujpavxydxlr7rcbf4z0gr2f9ct0qu9cv5agyzirttcbcr4vgadgcckjg9kyntre69bfrxgfkmo9wuf5bjzekm0956fb1y7t1ys9c2u7ymqds0qib86ts3cxj35gxtjtc0pkhhvi5nkgmdpuwfjqnmp8wrhoo9spttvyvqblenu8oll061yr1l1bj5ovof2kvfe7z9tp24lwdiu6dkhawzq3beltwar5sj0ho0nb31pm9jcmwqqctkgcb1ijd9skvn8d1doam50r5r1pyl2etgtucyfikvbckxbvmbdz792ibbm356bp96snxzzuhb6q6cptkvr44nw5jipmgwelm638 == \4\t\4\u\y\u\h\m\5\k\5\9\5\9\t\i\7\t\h\t\x\8\l\n\w\b\f\i\3\j\5\o\h\0\w\s\1\0\1\w\6\k\2\3\2\m\7\2\b\6\l\w\n\b\t\i\v\5\n\u\i\f\b\e\y\q\1\n\o\d\x\4\o\w\z\k\u\z\p\t\e\p\i\y\e\9\2\e\5\r\a\c\2\b\t\6\6\c\e\k\n\f\6\l\3\n\w\u\7\f\5\i\n\i\6\i\0\z\l\2\2\2\z\s\l\j\l\a\5\m\a\b\5\1\2\b\h\p\e\8\f\9\l\n\q\x\h\d\4\l\q\e\m\d\n\t\h\w\z\l\t\7\7\4\k\x\5\a\4\u\j\p\a\v\x\y\d\x\l\r\7\r\c\b\f\4\z\0\g\r\2\f\9\c\t\0\q\u\9\c\v\5\a\g\y\z\i\r\t\t\c\b\c\r\4\v\g\a\d\g\c\c\k\j\g\9\k\y\n\t\r\e\6\9\b\f\r\x\g\f\k\m\o\9\w\u\f\5\b\j\z\e\k\m\0\9\5\6\f\b\1\y\7\t\1\y\s\9\c\2\u\7\y\m\q\d\s\0\q\i\b\8\6\t\s\3\c\x\j\3\5\g\x\t\j\t\c\0\p\k\h\h\v\i\5\n\k\g\m\d\p\u\w\f\j\q\n\m\p\8\w\r\h\o\o\9\s\p\t\t\v\y\v\q\b\l\e\n\u\8\o\l\l\0\6\1\y\r\1\l\1\b\j\5\o\v\o\f\2\k\v\f\e\7\z\9\t\p\2\4\l\w\d\i\u\6\d\k\h\a\w\z\q\3\b\e\l\t\w\a\r\5\s\j\0\h\o\0\n\b\3\1\p\m\9\j\c\m\w\q\q\c\t\k\g\c\b\1\i\j\d\9\s\k\v\n\8\d\1\d\o\a\m\5\0\r\5\r\1\p\y\l\2\e\t\g\t\u\c\y\f\i\k\v\b\c\k\x\b\v\m\b\d\z\7\9\2\i\b\b\m\3\5\6\b\p\9\6\s\n\x\z\z\u\h\b\6\q\6\c\p\t\k\v\r\4\4\n\w\5\j\i\p\m\g\w\e\l\m\6\3\8 ]] 00:07:42.588 09:45:07 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:42.588 09:45:07 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:42.588 [2024-12-06 09:45:07.768145] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:07:42.588 [2024-12-06 09:45:07.768271] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60230 ] 00:07:42.847 [2024-12-06 09:45:07.912879] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.848 [2024-12-06 09:45:07.972480] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.848 [2024-12-06 09:45:08.029202] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:42.848  [2024-12-06T09:45:08.379Z] Copying: 512/512 [B] (average 500 kBps) 00:07:43.107 00:07:43.107 09:45:08 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 4t4uyuhm5k5959ti7thtx8lnwbfi3j5oh0ws101w6k232m72b6lwnbtiv5nuifbeyq1nodx4owzkuzptepiye92e5rac2bt66ceknf6l3nwu7f5ini6i0zl222zsljla5mab512bhpe8f9lnqxhd4lqemdnthwzlt774kx5a4ujpavxydxlr7rcbf4z0gr2f9ct0qu9cv5agyzirttcbcr4vgadgcckjg9kyntre69bfrxgfkmo9wuf5bjzekm0956fb1y7t1ys9c2u7ymqds0qib86ts3cxj35gxtjtc0pkhhvi5nkgmdpuwfjqnmp8wrhoo9spttvyvqblenu8oll061yr1l1bj5ovof2kvfe7z9tp24lwdiu6dkhawzq3beltwar5sj0ho0nb31pm9jcmwqqctkgcb1ijd9skvn8d1doam50r5r1pyl2etgtucyfikvbckxbvmbdz792ibbm356bp96snxzzuhb6q6cptkvr44nw5jipmgwelm638 == \4\t\4\u\y\u\h\m\5\k\5\9\5\9\t\i\7\t\h\t\x\8\l\n\w\b\f\i\3\j\5\o\h\0\w\s\1\0\1\w\6\k\2\3\2\m\7\2\b\6\l\w\n\b\t\i\v\5\n\u\i\f\b\e\y\q\1\n\o\d\x\4\o\w\z\k\u\z\p\t\e\p\i\y\e\9\2\e\5\r\a\c\2\b\t\6\6\c\e\k\n\f\6\l\3\n\w\u\7\f\5\i\n\i\6\i\0\z\l\2\2\2\z\s\l\j\l\a\5\m\a\b\5\1\2\b\h\p\e\8\f\9\l\n\q\x\h\d\4\l\q\e\m\d\n\t\h\w\z\l\t\7\7\4\k\x\5\a\4\u\j\p\a\v\x\y\d\x\l\r\7\r\c\b\f\4\z\0\g\r\2\f\9\c\t\0\q\u\9\c\v\5\a\g\y\z\i\r\t\t\c\b\c\r\4\v\g\a\d\g\c\c\k\j\g\9\k\y\n\t\r\e\6\9\b\f\r\x\g\f\k\m\o\9\w\u\f\5\b\j\z\e\k\m\0\9\5\6\f\b\1\y\7\t\1\y\s\9\c\2\u\7\y\m\q\d\s\0\q\i\b\8\6\t\s\3\c\x\j\3\5\g\x\t\j\t\c\0\p\k\h\h\v\i\5\n\k\g\m\d\p\u\w\f\j\q\n\m\p\8\w\r\h\o\o\9\s\p\t\t\v\y\v\q\b\l\e\n\u\8\o\l\l\0\6\1\y\r\1\l\1\b\j\5\o\v\o\f\2\k\v\f\e\7\z\9\t\p\2\4\l\w\d\i\u\6\d\k\h\a\w\z\q\3\b\e\l\t\w\a\r\5\s\j\0\h\o\0\n\b\3\1\p\m\9\j\c\m\w\q\q\c\t\k\g\c\b\1\i\j\d\9\s\k\v\n\8\d\1\d\o\a\m\5\0\r\5\r\1\p\y\l\2\e\t\g\t\u\c\y\f\i\k\v\b\c\k\x\b\v\m\b\d\z\7\9\2\i\b\b\m\3\5\6\b\p\9\6\s\n\x\z\z\u\h\b\6\q\6\c\p\t\k\v\r\4\4\n\w\5\j\i\p\m\g\w\e\l\m\6\3\8 ]] 00:07:43.107 09:45:08 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:43.107 09:45:08 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:07:43.107 09:45:08 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:07:43.107 09:45:08 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:07:43.107 09:45:08 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:43.107 09:45:08 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:43.107 [2024-12-06 09:45:08.306834] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:07:43.107 [2024-12-06 09:45:08.306917] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60245 ] 00:07:43.367 [2024-12-06 09:45:08.443978] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.367 [2024-12-06 09:45:08.501296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.367 [2024-12-06 09:45:08.554862] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:43.367  [2024-12-06T09:45:08.898Z] Copying: 512/512 [B] (average 500 kBps) 00:07:43.626 00:07:43.626 09:45:08 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ x9jh95rr7l4qlw5wcketihi00j3yhrgpk0qqxznnsrg4gxuxb88rdtducox1glcmcez4n1wyw5zyqjtor3l75xf4d82w4khbp7df3ze3e0t47fuuk8d4u5h2qpgygjbq0cqoc2yrp0cz1go1poziuqxz5yr4dw1at0v151ttpdfi0rxe6wwkhtve04mwkk92hpi3oe9d2do3kz08t1zsyonaft9hn6rmkgwfxpcfs9ehw2rn96j99pz1usqkgs06b0ikhrng3rtcud86n1l8es35aes7z1g1cuaab653mr43w328hjs4cfyuoiwl36oatqruekln7w0ctpizmkte2lv8k3kqdzgh8rdc5u3g1znxiwha1kicx575qhdk9zifojxodvjobzxt1885eoutqfbin24y0ecobmg63fxtwjbrmz7mjvzl72kbikpn6no4i9qkahz571ilxu1upzfh1fnkk90ic5rc7pes5rop579h7i4e16x9t38xu4byynhd == \x\9\j\h\9\5\r\r\7\l\4\q\l\w\5\w\c\k\e\t\i\h\i\0\0\j\3\y\h\r\g\p\k\0\q\q\x\z\n\n\s\r\g\4\g\x\u\x\b\8\8\r\d\t\d\u\c\o\x\1\g\l\c\m\c\e\z\4\n\1\w\y\w\5\z\y\q\j\t\o\r\3\l\7\5\x\f\4\d\8\2\w\4\k\h\b\p\7\d\f\3\z\e\3\e\0\t\4\7\f\u\u\k\8\d\4\u\5\h\2\q\p\g\y\g\j\b\q\0\c\q\o\c\2\y\r\p\0\c\z\1\g\o\1\p\o\z\i\u\q\x\z\5\y\r\4\d\w\1\a\t\0\v\1\5\1\t\t\p\d\f\i\0\r\x\e\6\w\w\k\h\t\v\e\0\4\m\w\k\k\9\2\h\p\i\3\o\e\9\d\2\d\o\3\k\z\0\8\t\1\z\s\y\o\n\a\f\t\9\h\n\6\r\m\k\g\w\f\x\p\c\f\s\9\e\h\w\2\r\n\9\6\j\9\9\p\z\1\u\s\q\k\g\s\0\6\b\0\i\k\h\r\n\g\3\r\t\c\u\d\8\6\n\1\l\8\e\s\3\5\a\e\s\7\z\1\g\1\c\u\a\a\b\6\5\3\m\r\4\3\w\3\2\8\h\j\s\4\c\f\y\u\o\i\w\l\3\6\o\a\t\q\r\u\e\k\l\n\7\w\0\c\t\p\i\z\m\k\t\e\2\l\v\8\k\3\k\q\d\z\g\h\8\r\d\c\5\u\3\g\1\z\n\x\i\w\h\a\1\k\i\c\x\5\7\5\q\h\d\k\9\z\i\f\o\j\x\o\d\v\j\o\b\z\x\t\1\8\8\5\e\o\u\t\q\f\b\i\n\2\4\y\0\e\c\o\b\m\g\6\3\f\x\t\w\j\b\r\m\z\7\m\j\v\z\l\7\2\k\b\i\k\p\n\6\n\o\4\i\9\q\k\a\h\z\5\7\1\i\l\x\u\1\u\p\z\f\h\1\f\n\k\k\9\0\i\c\5\r\c\7\p\e\s\5\r\o\p\5\7\9\h\7\i\4\e\1\6\x\9\t\3\8\x\u\4\b\y\y\n\h\d ]] 00:07:43.626 09:45:08 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:43.626 09:45:08 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:43.626 [2024-12-06 09:45:08.828212] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:07:43.626 [2024-12-06 09:45:08.828320] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60249 ] 00:07:43.885 [2024-12-06 09:45:08.976364] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.885 [2024-12-06 09:45:09.027125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.885 [2024-12-06 09:45:09.084339] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:43.885  [2024-12-06T09:45:09.415Z] Copying: 512/512 [B] (average 500 kBps) 00:07:44.143 00:07:44.143 09:45:09 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ x9jh95rr7l4qlw5wcketihi00j3yhrgpk0qqxznnsrg4gxuxb88rdtducox1glcmcez4n1wyw5zyqjtor3l75xf4d82w4khbp7df3ze3e0t47fuuk8d4u5h2qpgygjbq0cqoc2yrp0cz1go1poziuqxz5yr4dw1at0v151ttpdfi0rxe6wwkhtve04mwkk92hpi3oe9d2do3kz08t1zsyonaft9hn6rmkgwfxpcfs9ehw2rn96j99pz1usqkgs06b0ikhrng3rtcud86n1l8es35aes7z1g1cuaab653mr43w328hjs4cfyuoiwl36oatqruekln7w0ctpizmkte2lv8k3kqdzgh8rdc5u3g1znxiwha1kicx575qhdk9zifojxodvjobzxt1885eoutqfbin24y0ecobmg63fxtwjbrmz7mjvzl72kbikpn6no4i9qkahz571ilxu1upzfh1fnkk90ic5rc7pes5rop579h7i4e16x9t38xu4byynhd == \x\9\j\h\9\5\r\r\7\l\4\q\l\w\5\w\c\k\e\t\i\h\i\0\0\j\3\y\h\r\g\p\k\0\q\q\x\z\n\n\s\r\g\4\g\x\u\x\b\8\8\r\d\t\d\u\c\o\x\1\g\l\c\m\c\e\z\4\n\1\w\y\w\5\z\y\q\j\t\o\r\3\l\7\5\x\f\4\d\8\2\w\4\k\h\b\p\7\d\f\3\z\e\3\e\0\t\4\7\f\u\u\k\8\d\4\u\5\h\2\q\p\g\y\g\j\b\q\0\c\q\o\c\2\y\r\p\0\c\z\1\g\o\1\p\o\z\i\u\q\x\z\5\y\r\4\d\w\1\a\t\0\v\1\5\1\t\t\p\d\f\i\0\r\x\e\6\w\w\k\h\t\v\e\0\4\m\w\k\k\9\2\h\p\i\3\o\e\9\d\2\d\o\3\k\z\0\8\t\1\z\s\y\o\n\a\f\t\9\h\n\6\r\m\k\g\w\f\x\p\c\f\s\9\e\h\w\2\r\n\9\6\j\9\9\p\z\1\u\s\q\k\g\s\0\6\b\0\i\k\h\r\n\g\3\r\t\c\u\d\8\6\n\1\l\8\e\s\3\5\a\e\s\7\z\1\g\1\c\u\a\a\b\6\5\3\m\r\4\3\w\3\2\8\h\j\s\4\c\f\y\u\o\i\w\l\3\6\o\a\t\q\r\u\e\k\l\n\7\w\0\c\t\p\i\z\m\k\t\e\2\l\v\8\k\3\k\q\d\z\g\h\8\r\d\c\5\u\3\g\1\z\n\x\i\w\h\a\1\k\i\c\x\5\7\5\q\h\d\k\9\z\i\f\o\j\x\o\d\v\j\o\b\z\x\t\1\8\8\5\e\o\u\t\q\f\b\i\n\2\4\y\0\e\c\o\b\m\g\6\3\f\x\t\w\j\b\r\m\z\7\m\j\v\z\l\7\2\k\b\i\k\p\n\6\n\o\4\i\9\q\k\a\h\z\5\7\1\i\l\x\u\1\u\p\z\f\h\1\f\n\k\k\9\0\i\c\5\r\c\7\p\e\s\5\r\o\p\5\7\9\h\7\i\4\e\1\6\x\9\t\3\8\x\u\4\b\y\y\n\h\d ]] 00:07:44.143 09:45:09 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:44.143 09:45:09 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:44.143 [2024-12-06 09:45:09.386888] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:07:44.143 [2024-12-06 09:45:09.387071] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60264 ] 00:07:44.402 [2024-12-06 09:45:09.538771] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.402 [2024-12-06 09:45:09.579779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.402 [2024-12-06 09:45:09.633602] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:44.661  [2024-12-06T09:45:09.933Z] Copying: 512/512 [B] (average 250 kBps) 00:07:44.661 00:07:44.661 09:45:09 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ x9jh95rr7l4qlw5wcketihi00j3yhrgpk0qqxznnsrg4gxuxb88rdtducox1glcmcez4n1wyw5zyqjtor3l75xf4d82w4khbp7df3ze3e0t47fuuk8d4u5h2qpgygjbq0cqoc2yrp0cz1go1poziuqxz5yr4dw1at0v151ttpdfi0rxe6wwkhtve04mwkk92hpi3oe9d2do3kz08t1zsyonaft9hn6rmkgwfxpcfs9ehw2rn96j99pz1usqkgs06b0ikhrng3rtcud86n1l8es35aes7z1g1cuaab653mr43w328hjs4cfyuoiwl36oatqruekln7w0ctpizmkte2lv8k3kqdzgh8rdc5u3g1znxiwha1kicx575qhdk9zifojxodvjobzxt1885eoutqfbin24y0ecobmg63fxtwjbrmz7mjvzl72kbikpn6no4i9qkahz571ilxu1upzfh1fnkk90ic5rc7pes5rop579h7i4e16x9t38xu4byynhd == \x\9\j\h\9\5\r\r\7\l\4\q\l\w\5\w\c\k\e\t\i\h\i\0\0\j\3\y\h\r\g\p\k\0\q\q\x\z\n\n\s\r\g\4\g\x\u\x\b\8\8\r\d\t\d\u\c\o\x\1\g\l\c\m\c\e\z\4\n\1\w\y\w\5\z\y\q\j\t\o\r\3\l\7\5\x\f\4\d\8\2\w\4\k\h\b\p\7\d\f\3\z\e\3\e\0\t\4\7\f\u\u\k\8\d\4\u\5\h\2\q\p\g\y\g\j\b\q\0\c\q\o\c\2\y\r\p\0\c\z\1\g\o\1\p\o\z\i\u\q\x\z\5\y\r\4\d\w\1\a\t\0\v\1\5\1\t\t\p\d\f\i\0\r\x\e\6\w\w\k\h\t\v\e\0\4\m\w\k\k\9\2\h\p\i\3\o\e\9\d\2\d\o\3\k\z\0\8\t\1\z\s\y\o\n\a\f\t\9\h\n\6\r\m\k\g\w\f\x\p\c\f\s\9\e\h\w\2\r\n\9\6\j\9\9\p\z\1\u\s\q\k\g\s\0\6\b\0\i\k\h\r\n\g\3\r\t\c\u\d\8\6\n\1\l\8\e\s\3\5\a\e\s\7\z\1\g\1\c\u\a\a\b\6\5\3\m\r\4\3\w\3\2\8\h\j\s\4\c\f\y\u\o\i\w\l\3\6\o\a\t\q\r\u\e\k\l\n\7\w\0\c\t\p\i\z\m\k\t\e\2\l\v\8\k\3\k\q\d\z\g\h\8\r\d\c\5\u\3\g\1\z\n\x\i\w\h\a\1\k\i\c\x\5\7\5\q\h\d\k\9\z\i\f\o\j\x\o\d\v\j\o\b\z\x\t\1\8\8\5\e\o\u\t\q\f\b\i\n\2\4\y\0\e\c\o\b\m\g\6\3\f\x\t\w\j\b\r\m\z\7\m\j\v\z\l\7\2\k\b\i\k\p\n\6\n\o\4\i\9\q\k\a\h\z\5\7\1\i\l\x\u\1\u\p\z\f\h\1\f\n\k\k\9\0\i\c\5\r\c\7\p\e\s\5\r\o\p\5\7\9\h\7\i\4\e\1\6\x\9\t\3\8\x\u\4\b\y\y\n\h\d ]] 00:07:44.661 09:45:09 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:44.661 09:45:09 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:44.661 [2024-12-06 09:45:09.927112] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:07:44.661 [2024-12-06 09:45:09.927239] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60268 ] 00:07:44.919 [2024-12-06 09:45:10.076040] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.919 [2024-12-06 09:45:10.139687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.176 [2024-12-06 09:45:10.194757] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:45.176  [2024-12-06T09:45:10.448Z] Copying: 512/512 [B] (average 250 kBps) 00:07:45.176 00:07:45.176 09:45:10 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ x9jh95rr7l4qlw5wcketihi00j3yhrgpk0qqxznnsrg4gxuxb88rdtducox1glcmcez4n1wyw5zyqjtor3l75xf4d82w4khbp7df3ze3e0t47fuuk8d4u5h2qpgygjbq0cqoc2yrp0cz1go1poziuqxz5yr4dw1at0v151ttpdfi0rxe6wwkhtve04mwkk92hpi3oe9d2do3kz08t1zsyonaft9hn6rmkgwfxpcfs9ehw2rn96j99pz1usqkgs06b0ikhrng3rtcud86n1l8es35aes7z1g1cuaab653mr43w328hjs4cfyuoiwl36oatqruekln7w0ctpizmkte2lv8k3kqdzgh8rdc5u3g1znxiwha1kicx575qhdk9zifojxodvjobzxt1885eoutqfbin24y0ecobmg63fxtwjbrmz7mjvzl72kbikpn6no4i9qkahz571ilxu1upzfh1fnkk90ic5rc7pes5rop579h7i4e16x9t38xu4byynhd == \x\9\j\h\9\5\r\r\7\l\4\q\l\w\5\w\c\k\e\t\i\h\i\0\0\j\3\y\h\r\g\p\k\0\q\q\x\z\n\n\s\r\g\4\g\x\u\x\b\8\8\r\d\t\d\u\c\o\x\1\g\l\c\m\c\e\z\4\n\1\w\y\w\5\z\y\q\j\t\o\r\3\l\7\5\x\f\4\d\8\2\w\4\k\h\b\p\7\d\f\3\z\e\3\e\0\t\4\7\f\u\u\k\8\d\4\u\5\h\2\q\p\g\y\g\j\b\q\0\c\q\o\c\2\y\r\p\0\c\z\1\g\o\1\p\o\z\i\u\q\x\z\5\y\r\4\d\w\1\a\t\0\v\1\5\1\t\t\p\d\f\i\0\r\x\e\6\w\w\k\h\t\v\e\0\4\m\w\k\k\9\2\h\p\i\3\o\e\9\d\2\d\o\3\k\z\0\8\t\1\z\s\y\o\n\a\f\t\9\h\n\6\r\m\k\g\w\f\x\p\c\f\s\9\e\h\w\2\r\n\9\6\j\9\9\p\z\1\u\s\q\k\g\s\0\6\b\0\i\k\h\r\n\g\3\r\t\c\u\d\8\6\n\1\l\8\e\s\3\5\a\e\s\7\z\1\g\1\c\u\a\a\b\6\5\3\m\r\4\3\w\3\2\8\h\j\s\4\c\f\y\u\o\i\w\l\3\6\o\a\t\q\r\u\e\k\l\n\7\w\0\c\t\p\i\z\m\k\t\e\2\l\v\8\k\3\k\q\d\z\g\h\8\r\d\c\5\u\3\g\1\z\n\x\i\w\h\a\1\k\i\c\x\5\7\5\q\h\d\k\9\z\i\f\o\j\x\o\d\v\j\o\b\z\x\t\1\8\8\5\e\o\u\t\q\f\b\i\n\2\4\y\0\e\c\o\b\m\g\6\3\f\x\t\w\j\b\r\m\z\7\m\j\v\z\l\7\2\k\b\i\k\p\n\6\n\o\4\i\9\q\k\a\h\z\5\7\1\i\l\x\u\1\u\p\z\f\h\1\f\n\k\k\9\0\i\c\5\r\c\7\p\e\s\5\r\o\p\5\7\9\h\7\i\4\e\1\6\x\9\t\3\8\x\u\4\b\y\y\n\h\d ]] 00:07:45.176 00:07:45.176 real 0m4.317s 00:07:45.176 user 0m2.318s 00:07:45.176 sys 0m2.208s 00:07:45.176 09:45:10 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:45.176 09:45:10 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:07:45.176 ************************************ 00:07:45.176 END TEST dd_flags_misc 00:07:45.176 ************************************ 00:07:45.434 09:45:10 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:07:45.434 09:45:10 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:07:45.434 * Second test run, disabling liburing, forcing AIO 00:07:45.434 09:45:10 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:07:45.434 09:45:10 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:07:45.434 09:45:10 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:45.434 09:45:10 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:45.434 09:45:10 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:07:45.434 ************************************ 00:07:45.434 START TEST dd_flag_append_forced_aio 00:07:45.434 ************************************ 00:07:45.434 09:45:10 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1129 -- # append 00:07:45.434 09:45:10 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:07:45.434 09:45:10 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:07:45.434 09:45:10 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:07:45.434 09:45:10 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:45.434 09:45:10 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:45.434 09:45:10 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=q366du3dkn8r9fcnk9jn2yhyzvkdg8ga 00:07:45.434 09:45:10 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:07:45.434 09:45:10 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:45.434 09:45:10 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:45.434 09:45:10 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=n9nx8tjyjgf8jrufym4us7ve2salz81g 00:07:45.434 09:45:10 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s q366du3dkn8r9fcnk9jn2yhyzvkdg8ga 00:07:45.434 09:45:10 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s n9nx8tjyjgf8jrufym4us7ve2salz81g 00:07:45.434 09:45:10 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:07:45.434 [2024-12-06 09:45:10.567840] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:07:45.434 [2024-12-06 09:45:10.568012] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60302 ] 00:07:45.692 [2024-12-06 09:45:10.721625] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.692 [2024-12-06 09:45:10.768951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.692 [2024-12-06 09:45:10.822089] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:45.692  [2024-12-06T09:45:11.223Z] Copying: 32/32 [B] (average 31 kBps) 00:07:45.951 00:07:45.951 09:45:11 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ n9nx8tjyjgf8jrufym4us7ve2salz81gq366du3dkn8r9fcnk9jn2yhyzvkdg8ga == \n\9\n\x\8\t\j\y\j\g\f\8\j\r\u\f\y\m\4\u\s\7\v\e\2\s\a\l\z\8\1\g\q\3\6\6\d\u\3\d\k\n\8\r\9\f\c\n\k\9\j\n\2\y\h\y\z\v\k\d\g\8\g\a ]] 00:07:45.951 00:07:45.951 real 0m0.585s 00:07:45.951 user 0m0.309s 00:07:45.951 sys 0m0.159s 00:07:45.951 09:45:11 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:45.951 09:45:11 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:45.951 ************************************ 00:07:45.951 END TEST dd_flag_append_forced_aio 00:07:45.951 ************************************ 00:07:45.951 09:45:11 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:07:45.951 09:45:11 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:45.951 09:45:11 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:45.951 09:45:11 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:45.951 ************************************ 00:07:45.951 START TEST dd_flag_directory_forced_aio 00:07:45.951 ************************************ 00:07:45.951 09:45:11 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1129 -- # directory 00:07:45.951 09:45:11 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:45.951 09:45:11 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:07:45.951 09:45:11 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:45.951 09:45:11 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:45.951 09:45:11 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:45.951 09:45:11 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:45.951 09:45:11 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:45.951 09:45:11 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:45.951 09:45:11 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:45.951 09:45:11 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:45.951 09:45:11 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:45.952 09:45:11 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:45.952 [2024-12-06 09:45:11.188774] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:07:45.952 [2024-12-06 09:45:11.188884] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60329 ] 00:07:46.210 [2024-12-06 09:45:11.334440] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.210 [2024-12-06 09:45:11.381967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.210 [2024-12-06 09:45:11.434387] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:46.210 [2024-12-06 09:45:11.468205] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:46.210 [2024-12-06 09:45:11.468272] spdk_dd.c:1081:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:46.210 [2024-12-06 09:45:11.468284] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:46.468 [2024-12-06 09:45:11.581091] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:07:46.468 09:45:11 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # es=236 00:07:46.468 09:45:11 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:46.468 09:45:11 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@664 -- # es=108 00:07:46.468 09:45:11 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:07:46.468 09:45:11 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:07:46.468 09:45:11 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:46.468 09:45:11 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:46.468 09:45:11 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:07:46.468 09:45:11 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:46.468 09:45:11 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.468 09:45:11 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:46.468 09:45:11 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.468 09:45:11 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:46.468 09:45:11 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.468 09:45:11 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:46.468 09:45:11 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.468 09:45:11 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:46.468 09:45:11 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:46.468 [2024-12-06 09:45:11.681664] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:07:46.468 [2024-12-06 09:45:11.681763] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60338 ] 00:07:46.727 [2024-12-06 09:45:11.811430] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.727 [2024-12-06 09:45:11.855411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.727 [2024-12-06 09:45:11.911632] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:46.727 [2024-12-06 09:45:11.948895] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:46.727 [2024-12-06 09:45:11.948962] spdk_dd.c:1130:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:46.727 [2024-12-06 09:45:11.948978] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:46.985 [2024-12-06 09:45:12.069423] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:07:46.985 09:45:12 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # es=236 00:07:46.985 09:45:12 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:46.985 09:45:12 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@664 -- # es=108 00:07:46.985 09:45:12 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:07:46.985 09:45:12 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:07:46.985 09:45:12 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:46.985 00:07:46.985 real 0m1.015s 00:07:46.985 user 0m0.534s 00:07:46.985 sys 0m0.271s 00:07:46.985 09:45:12 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:46.985 09:45:12 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:46.985 ************************************ 00:07:46.985 END TEST dd_flag_directory_forced_aio 00:07:46.985 ************************************ 00:07:46.985 09:45:12 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:07:46.985 09:45:12 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:46.985 09:45:12 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:46.985 09:45:12 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:46.985 ************************************ 00:07:46.985 START TEST dd_flag_nofollow_forced_aio 00:07:46.985 ************************************ 00:07:46.985 09:45:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1129 -- # nofollow 00:07:46.985 09:45:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:46.985 09:45:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:46.985 09:45:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:46.985 09:45:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:46.985 09:45:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:46.985 09:45:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:07:46.985 09:45:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:46.985 09:45:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.985 09:45:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:46.985 09:45:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.985 09:45:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:46.985 09:45:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.985 09:45:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:46.985 09:45:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.985 09:45:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:46.985 09:45:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:47.243 [2024-12-06 09:45:12.255267] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:07:47.243 [2024-12-06 09:45:12.255380] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60371 ] 00:07:47.243 [2024-12-06 09:45:12.402218] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.243 [2024-12-06 09:45:12.451277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.243 [2024-12-06 09:45:12.505131] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:47.501 [2024-12-06 09:45:12.542001] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:47.501 [2024-12-06 09:45:12.542055] spdk_dd.c:1081:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:47.501 [2024-12-06 09:45:12.542077] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:47.501 [2024-12-06 09:45:12.661093] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:07:47.501 09:45:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # es=216 00:07:47.501 09:45:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:47.501 09:45:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@664 -- # es=88 00:07:47.501 09:45:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:07:47.501 09:45:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:07:47.501 09:45:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:47.501 09:45:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:47.501 09:45:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:07:47.501 09:45:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:47.501 09:45:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # local 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:47.501 09:45:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:47.501 09:45:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:47.501 09:45:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:47.501 09:45:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:47.501 09:45:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:47.501 09:45:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:47.501 09:45:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:47.501 09:45:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:47.501 [2024-12-06 09:45:12.762803] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:07:47.501 [2024-12-06 09:45:12.762884] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60376 ] 00:07:47.759 [2024-12-06 09:45:12.901553] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.760 [2024-12-06 09:45:12.946510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.760 [2024-12-06 09:45:12.998866] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:48.018 [2024-12-06 09:45:13.035239] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:48.018 [2024-12-06 09:45:13.035290] spdk_dd.c:1130:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:48.018 [2024-12-06 09:45:13.035313] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:48.018 [2024-12-06 09:45:13.151080] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:07:48.018 09:45:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # es=216 00:07:48.018 09:45:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:48.018 09:45:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@664 -- # es=88 00:07:48.018 09:45:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:07:48.018 09:45:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:07:48.018 09:45:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:48.018 09:45:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 
-- # gen_bytes 512 00:07:48.018 09:45:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:48.018 09:45:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:48.018 09:45:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:48.018 [2024-12-06 09:45:13.274120] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:07:48.018 [2024-12-06 09:45:13.274248] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60384 ] 00:07:48.277 [2024-12-06 09:45:13.417832] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.277 [2024-12-06 09:45:13.471361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.277 [2024-12-06 09:45:13.523776] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:48.537  [2024-12-06T09:45:13.809Z] Copying: 512/512 [B] (average 500 kBps) 00:07:48.537 00:07:48.537 09:45:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ 7y91t39qkygoczslgwdz7bc9uy6fhe3z6yxjd3aofgjc4hjbh0z18fey51a0l8bhpuz8529dey1dm9evrtjif5w77xman81x1w77nrnghnhdt4grvietiyuroowr3ddhtw80keihrt2i72qqhfnpcfvf5a70v0de46z4y68liqz09mxig9p1hycg3u4tvatg357cecd0nhysh37ndjg6x3kbausvuwl2vcybs7sjjqwrqoiz3gr95ur6jkxqd5lnyj3wsaiznryj5qw1zsv338gifjwysvfj6lvd6h9ooj1bhaa2ly3yikerqnld65mu8qte4mpo5yi8ajtpyjone1yvi6h4borfvckjm9qv6w213y6aleuv4onyvmwy802yipb068to4oqg330zclrs1a4pq2250rzyfsvatj704vuqnd763jv6q1cfvl5uylifh6i6fp4idt96lwr9hrz957fnzcgzz60kdfo1ayqu944f5w7s6yxlr9jd72xddtq8 == \7\y\9\1\t\3\9\q\k\y\g\o\c\z\s\l\g\w\d\z\7\b\c\9\u\y\6\f\h\e\3\z\6\y\x\j\d\3\a\o\f\g\j\c\4\h\j\b\h\0\z\1\8\f\e\y\5\1\a\0\l\8\b\h\p\u\z\8\5\2\9\d\e\y\1\d\m\9\e\v\r\t\j\i\f\5\w\7\7\x\m\a\n\8\1\x\1\w\7\7\n\r\n\g\h\n\h\d\t\4\g\r\v\i\e\t\i\y\u\r\o\o\w\r\3\d\d\h\t\w\8\0\k\e\i\h\r\t\2\i\7\2\q\q\h\f\n\p\c\f\v\f\5\a\7\0\v\0\d\e\4\6\z\4\y\6\8\l\i\q\z\0\9\m\x\i\g\9\p\1\h\y\c\g\3\u\4\t\v\a\t\g\3\5\7\c\e\c\d\0\n\h\y\s\h\3\7\n\d\j\g\6\x\3\k\b\a\u\s\v\u\w\l\2\v\c\y\b\s\7\s\j\j\q\w\r\q\o\i\z\3\g\r\9\5\u\r\6\j\k\x\q\d\5\l\n\y\j\3\w\s\a\i\z\n\r\y\j\5\q\w\1\z\s\v\3\3\8\g\i\f\j\w\y\s\v\f\j\6\l\v\d\6\h\9\o\o\j\1\b\h\a\a\2\l\y\3\y\i\k\e\r\q\n\l\d\6\5\m\u\8\q\t\e\4\m\p\o\5\y\i\8\a\j\t\p\y\j\o\n\e\1\y\v\i\6\h\4\b\o\r\f\v\c\k\j\m\9\q\v\6\w\2\1\3\y\6\a\l\e\u\v\4\o\n\y\v\m\w\y\8\0\2\y\i\p\b\0\6\8\t\o\4\o\q\g\3\3\0\z\c\l\r\s\1\a\4\p\q\2\2\5\0\r\z\y\f\s\v\a\t\j\7\0\4\v\u\q\n\d\7\6\3\j\v\6\q\1\c\f\v\l\5\u\y\l\i\f\h\6\i\6\f\p\4\i\d\t\9\6\l\w\r\9\h\r\z\9\5\7\f\n\z\c\g\z\z\6\0\k\d\f\o\1\a\y\q\u\9\4\4\f\5\w\7\s\6\y\x\l\r\9\j\d\7\2\x\d\d\t\q\8 ]] 00:07:48.537 00:07:48.537 real 0m1.580s 00:07:48.537 user 0m0.821s 00:07:48.537 sys 0m0.423s 00:07:48.537 09:45:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:48.537 09:45:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:48.537 ************************************ 00:07:48.537 END TEST dd_flag_nofollow_forced_aio 00:07:48.537 ************************************ 00:07:48.796 09:45:13 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 
-- # run_test dd_flag_noatime_forced_aio noatime 00:07:48.796 09:45:13 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:48.796 09:45:13 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:48.796 09:45:13 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:48.796 ************************************ 00:07:48.796 START TEST dd_flag_noatime_forced_aio 00:07:48.796 ************************************ 00:07:48.796 09:45:13 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1129 -- # noatime 00:07:48.796 09:45:13 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:07:48.796 09:45:13 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:07:48.796 09:45:13 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:07:48.796 09:45:13 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:48.796 09:45:13 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:48.796 09:45:13 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:48.796 09:45:13 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1733478313 00:07:48.796 09:45:13 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:48.796 09:45:13 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1733478313 00:07:48.796 09:45:13 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:07:49.734 09:45:14 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:49.734 [2024-12-06 09:45:14.905851] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:07:49.734 [2024-12-06 09:45:14.905958] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60424 ] 00:07:50.004 [2024-12-06 09:45:15.058504] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.004 [2024-12-06 09:45:15.114495] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.004 [2024-12-06 09:45:15.175052] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:50.004  [2024-12-06T09:45:15.539Z] Copying: 512/512 [B] (average 500 kBps) 00:07:50.267 00:07:50.267 09:45:15 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:50.267 09:45:15 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1733478313 )) 00:07:50.267 09:45:15 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:50.268 09:45:15 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1733478313 )) 00:07:50.268 09:45:15 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:50.268 [2024-12-06 09:45:15.497198] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:07:50.268 [2024-12-06 09:45:15.497335] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60441 ] 00:07:50.526 [2024-12-06 09:45:15.644625] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.526 [2024-12-06 09:45:15.690744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.526 [2024-12-06 09:45:15.744547] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:50.526  [2024-12-06T09:45:16.057Z] Copying: 512/512 [B] (average 500 kBps) 00:07:50.785 00:07:50.785 09:45:15 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:50.785 09:45:15 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1733478315 )) 00:07:50.785 00:07:50.785 real 0m2.167s 00:07:50.785 user 0m0.590s 00:07:50.786 sys 0m0.332s 00:07:50.786 09:45:15 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:50.786 ************************************ 00:07:50.786 END TEST dd_flag_noatime_forced_aio 00:07:50.786 09:45:15 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:50.786 ************************************ 00:07:50.786 09:45:16 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:07:50.786 09:45:16 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:50.786 09:45:16 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:50.786 09:45:16 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:07:50.786 ************************************ 00:07:50.786 START TEST dd_flags_misc_forced_aio 00:07:50.786 ************************************ 00:07:50.786 09:45:16 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1129 -- # io 00:07:50.786 09:45:16 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:07:50.786 09:45:16 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:07:50.786 09:45:16 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:07:50.786 09:45:16 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:50.786 09:45:16 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:07:50.786 09:45:16 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:50.786 09:45:16 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:51.045 09:45:16 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:51.045 09:45:16 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:51.045 [2024-12-06 09:45:16.099317] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:07:51.045 [2024-12-06 09:45:16.099392] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60462 ] 00:07:51.045 [2024-12-06 09:45:16.234310] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.045 [2024-12-06 09:45:16.276936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.304 [2024-12-06 09:45:16.328408] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:51.304  [2024-12-06T09:45:16.576Z] Copying: 512/512 [B] (average 500 kBps) 00:07:51.304 00:07:51.304 09:45:16 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ vkrdkpca51twscbac2lvc701tcziqwbjbl990y2mugzqhd62qzqwdoljvjqyv2dsxyudv1udjaeogx6deo5dhosw8ldyrb5qr3uv5i257k6gzxozqn4a2vx86vrllsi3sj3lo265u7hi1te29pobv032fjuh6ln2olddsqk87u8a5r8tdp4qhsuyghp4lp57ty5ycffmv7a9rd3pkr6u2b19poa9ybm6ho0l6u5ew2fz2pk0ucs69o8qbocsnhainu9j7yoa7ug40klqnl3nye76pukmyrlsbmvbq4ab76gsdagoxcz1ae09og4zvpsteaipendhyvacm8u674pc35vdbl4v70p2ahhzf5geyxuwc9hbthqv7u6qinde60wghjcrt5waeg8p6izcpklkalpuv6s91ejoho2we76ork5mjtw1dlwehiwcvj0z7f7xsp55wxb7o1dty4go18lzi4m8mme5bf9yeztt7x089sjm39va11af4511jgl75d6h == 
\v\k\r\d\k\p\c\a\5\1\t\w\s\c\b\a\c\2\l\v\c\7\0\1\t\c\z\i\q\w\b\j\b\l\9\9\0\y\2\m\u\g\z\q\h\d\6\2\q\z\q\w\d\o\l\j\v\j\q\y\v\2\d\s\x\y\u\d\v\1\u\d\j\a\e\o\g\x\6\d\e\o\5\d\h\o\s\w\8\l\d\y\r\b\5\q\r\3\u\v\5\i\2\5\7\k\6\g\z\x\o\z\q\n\4\a\2\v\x\8\6\v\r\l\l\s\i\3\s\j\3\l\o\2\6\5\u\7\h\i\1\t\e\2\9\p\o\b\v\0\3\2\f\j\u\h\6\l\n\2\o\l\d\d\s\q\k\8\7\u\8\a\5\r\8\t\d\p\4\q\h\s\u\y\g\h\p\4\l\p\5\7\t\y\5\y\c\f\f\m\v\7\a\9\r\d\3\p\k\r\6\u\2\b\1\9\p\o\a\9\y\b\m\6\h\o\0\l\6\u\5\e\w\2\f\z\2\p\k\0\u\c\s\6\9\o\8\q\b\o\c\s\n\h\a\i\n\u\9\j\7\y\o\a\7\u\g\4\0\k\l\q\n\l\3\n\y\e\7\6\p\u\k\m\y\r\l\s\b\m\v\b\q\4\a\b\7\6\g\s\d\a\g\o\x\c\z\1\a\e\0\9\o\g\4\z\v\p\s\t\e\a\i\p\e\n\d\h\y\v\a\c\m\8\u\6\7\4\p\c\3\5\v\d\b\l\4\v\7\0\p\2\a\h\h\z\f\5\g\e\y\x\u\w\c\9\h\b\t\h\q\v\7\u\6\q\i\n\d\e\6\0\w\g\h\j\c\r\t\5\w\a\e\g\8\p\6\i\z\c\p\k\l\k\a\l\p\u\v\6\s\9\1\e\j\o\h\o\2\w\e\7\6\o\r\k\5\m\j\t\w\1\d\l\w\e\h\i\w\c\v\j\0\z\7\f\7\x\s\p\5\5\w\x\b\7\o\1\d\t\y\4\g\o\1\8\l\z\i\4\m\8\m\m\e\5\b\f\9\y\e\z\t\t\7\x\0\8\9\s\j\m\3\9\v\a\1\1\a\f\4\5\1\1\j\g\l\7\5\d\6\h ]] 00:07:51.304 09:45:16 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:51.304 09:45:16 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:51.564 [2024-12-06 09:45:16.620555] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:07:51.564 [2024-12-06 09:45:16.620711] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60476 ] 00:07:51.564 [2024-12-06 09:45:16.768410] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.564 [2024-12-06 09:45:16.823551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.823 [2024-12-06 09:45:16.887307] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:51.823  [2024-12-06T09:45:17.355Z] Copying: 512/512 [B] (average 500 kBps) 00:07:52.083 00:07:52.083 09:45:17 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ vkrdkpca51twscbac2lvc701tcziqwbjbl990y2mugzqhd62qzqwdoljvjqyv2dsxyudv1udjaeogx6deo5dhosw8ldyrb5qr3uv5i257k6gzxozqn4a2vx86vrllsi3sj3lo265u7hi1te29pobv032fjuh6ln2olddsqk87u8a5r8tdp4qhsuyghp4lp57ty5ycffmv7a9rd3pkr6u2b19poa9ybm6ho0l6u5ew2fz2pk0ucs69o8qbocsnhainu9j7yoa7ug40klqnl3nye76pukmyrlsbmvbq4ab76gsdagoxcz1ae09og4zvpsteaipendhyvacm8u674pc35vdbl4v70p2ahhzf5geyxuwc9hbthqv7u6qinde60wghjcrt5waeg8p6izcpklkalpuv6s91ejoho2we76ork5mjtw1dlwehiwcvj0z7f7xsp55wxb7o1dty4go18lzi4m8mme5bf9yeztt7x089sjm39va11af4511jgl75d6h == 
\v\k\r\d\k\p\c\a\5\1\t\w\s\c\b\a\c\2\l\v\c\7\0\1\t\c\z\i\q\w\b\j\b\l\9\9\0\y\2\m\u\g\z\q\h\d\6\2\q\z\q\w\d\o\l\j\v\j\q\y\v\2\d\s\x\y\u\d\v\1\u\d\j\a\e\o\g\x\6\d\e\o\5\d\h\o\s\w\8\l\d\y\r\b\5\q\r\3\u\v\5\i\2\5\7\k\6\g\z\x\o\z\q\n\4\a\2\v\x\8\6\v\r\l\l\s\i\3\s\j\3\l\o\2\6\5\u\7\h\i\1\t\e\2\9\p\o\b\v\0\3\2\f\j\u\h\6\l\n\2\o\l\d\d\s\q\k\8\7\u\8\a\5\r\8\t\d\p\4\q\h\s\u\y\g\h\p\4\l\p\5\7\t\y\5\y\c\f\f\m\v\7\a\9\r\d\3\p\k\r\6\u\2\b\1\9\p\o\a\9\y\b\m\6\h\o\0\l\6\u\5\e\w\2\f\z\2\p\k\0\u\c\s\6\9\o\8\q\b\o\c\s\n\h\a\i\n\u\9\j\7\y\o\a\7\u\g\4\0\k\l\q\n\l\3\n\y\e\7\6\p\u\k\m\y\r\l\s\b\m\v\b\q\4\a\b\7\6\g\s\d\a\g\o\x\c\z\1\a\e\0\9\o\g\4\z\v\p\s\t\e\a\i\p\e\n\d\h\y\v\a\c\m\8\u\6\7\4\p\c\3\5\v\d\b\l\4\v\7\0\p\2\a\h\h\z\f\5\g\e\y\x\u\w\c\9\h\b\t\h\q\v\7\u\6\q\i\n\d\e\6\0\w\g\h\j\c\r\t\5\w\a\e\g\8\p\6\i\z\c\p\k\l\k\a\l\p\u\v\6\s\9\1\e\j\o\h\o\2\w\e\7\6\o\r\k\5\m\j\t\w\1\d\l\w\e\h\i\w\c\v\j\0\z\7\f\7\x\s\p\5\5\w\x\b\7\o\1\d\t\y\4\g\o\1\8\l\z\i\4\m\8\m\m\e\5\b\f\9\y\e\z\t\t\7\x\0\8\9\s\j\m\3\9\v\a\1\1\a\f\4\5\1\1\j\g\l\7\5\d\6\h ]] 00:07:52.083 09:45:17 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:52.083 09:45:17 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:52.083 [2024-12-06 09:45:17.215660] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:07:52.083 [2024-12-06 09:45:17.215746] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60479 ] 00:07:52.343 [2024-12-06 09:45:17.366734] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.343 [2024-12-06 09:45:17.425392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.343 [2024-12-06 09:45:17.486305] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:52.343  [2024-12-06T09:45:17.874Z] Copying: 512/512 [B] (average 166 kBps) 00:07:52.603 00:07:52.603 09:45:17 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ vkrdkpca51twscbac2lvc701tcziqwbjbl990y2mugzqhd62qzqwdoljvjqyv2dsxyudv1udjaeogx6deo5dhosw8ldyrb5qr3uv5i257k6gzxozqn4a2vx86vrllsi3sj3lo265u7hi1te29pobv032fjuh6ln2olddsqk87u8a5r8tdp4qhsuyghp4lp57ty5ycffmv7a9rd3pkr6u2b19poa9ybm6ho0l6u5ew2fz2pk0ucs69o8qbocsnhainu9j7yoa7ug40klqnl3nye76pukmyrlsbmvbq4ab76gsdagoxcz1ae09og4zvpsteaipendhyvacm8u674pc35vdbl4v70p2ahhzf5geyxuwc9hbthqv7u6qinde60wghjcrt5waeg8p6izcpklkalpuv6s91ejoho2we76ork5mjtw1dlwehiwcvj0z7f7xsp55wxb7o1dty4go18lzi4m8mme5bf9yeztt7x089sjm39va11af4511jgl75d6h == 
\v\k\r\d\k\p\c\a\5\1\t\w\s\c\b\a\c\2\l\v\c\7\0\1\t\c\z\i\q\w\b\j\b\l\9\9\0\y\2\m\u\g\z\q\h\d\6\2\q\z\q\w\d\o\l\j\v\j\q\y\v\2\d\s\x\y\u\d\v\1\u\d\j\a\e\o\g\x\6\d\e\o\5\d\h\o\s\w\8\l\d\y\r\b\5\q\r\3\u\v\5\i\2\5\7\k\6\g\z\x\o\z\q\n\4\a\2\v\x\8\6\v\r\l\l\s\i\3\s\j\3\l\o\2\6\5\u\7\h\i\1\t\e\2\9\p\o\b\v\0\3\2\f\j\u\h\6\l\n\2\o\l\d\d\s\q\k\8\7\u\8\a\5\r\8\t\d\p\4\q\h\s\u\y\g\h\p\4\l\p\5\7\t\y\5\y\c\f\f\m\v\7\a\9\r\d\3\p\k\r\6\u\2\b\1\9\p\o\a\9\y\b\m\6\h\o\0\l\6\u\5\e\w\2\f\z\2\p\k\0\u\c\s\6\9\o\8\q\b\o\c\s\n\h\a\i\n\u\9\j\7\y\o\a\7\u\g\4\0\k\l\q\n\l\3\n\y\e\7\6\p\u\k\m\y\r\l\s\b\m\v\b\q\4\a\b\7\6\g\s\d\a\g\o\x\c\z\1\a\e\0\9\o\g\4\z\v\p\s\t\e\a\i\p\e\n\d\h\y\v\a\c\m\8\u\6\7\4\p\c\3\5\v\d\b\l\4\v\7\0\p\2\a\h\h\z\f\5\g\e\y\x\u\w\c\9\h\b\t\h\q\v\7\u\6\q\i\n\d\e\6\0\w\g\h\j\c\r\t\5\w\a\e\g\8\p\6\i\z\c\p\k\l\k\a\l\p\u\v\6\s\9\1\e\j\o\h\o\2\w\e\7\6\o\r\k\5\m\j\t\w\1\d\l\w\e\h\i\w\c\v\j\0\z\7\f\7\x\s\p\5\5\w\x\b\7\o\1\d\t\y\4\g\o\1\8\l\z\i\4\m\8\m\m\e\5\b\f\9\y\e\z\t\t\7\x\0\8\9\s\j\m\3\9\v\a\1\1\a\f\4\5\1\1\j\g\l\7\5\d\6\h ]] 00:07:52.603 09:45:17 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:52.603 09:45:17 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:52.603 [2024-12-06 09:45:17.778159] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:07:52.603 [2024-12-06 09:45:17.778225] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60492 ] 00:07:52.863 [2024-12-06 09:45:17.913091] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.863 [2024-12-06 09:45:17.958894] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.863 [2024-12-06 09:45:18.015960] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:52.863  [2024-12-06T09:45:18.396Z] Copying: 512/512 [B] (average 500 kBps) 00:07:53.124 00:07:53.124 09:45:18 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ vkrdkpca51twscbac2lvc701tcziqwbjbl990y2mugzqhd62qzqwdoljvjqyv2dsxyudv1udjaeogx6deo5dhosw8ldyrb5qr3uv5i257k6gzxozqn4a2vx86vrllsi3sj3lo265u7hi1te29pobv032fjuh6ln2olddsqk87u8a5r8tdp4qhsuyghp4lp57ty5ycffmv7a9rd3pkr6u2b19poa9ybm6ho0l6u5ew2fz2pk0ucs69o8qbocsnhainu9j7yoa7ug40klqnl3nye76pukmyrlsbmvbq4ab76gsdagoxcz1ae09og4zvpsteaipendhyvacm8u674pc35vdbl4v70p2ahhzf5geyxuwc9hbthqv7u6qinde60wghjcrt5waeg8p6izcpklkalpuv6s91ejoho2we76ork5mjtw1dlwehiwcvj0z7f7xsp55wxb7o1dty4go18lzi4m8mme5bf9yeztt7x089sjm39va11af4511jgl75d6h == 
\v\k\r\d\k\p\c\a\5\1\t\w\s\c\b\a\c\2\l\v\c\7\0\1\t\c\z\i\q\w\b\j\b\l\9\9\0\y\2\m\u\g\z\q\h\d\6\2\q\z\q\w\d\o\l\j\v\j\q\y\v\2\d\s\x\y\u\d\v\1\u\d\j\a\e\o\g\x\6\d\e\o\5\d\h\o\s\w\8\l\d\y\r\b\5\q\r\3\u\v\5\i\2\5\7\k\6\g\z\x\o\z\q\n\4\a\2\v\x\8\6\v\r\l\l\s\i\3\s\j\3\l\o\2\6\5\u\7\h\i\1\t\e\2\9\p\o\b\v\0\3\2\f\j\u\h\6\l\n\2\o\l\d\d\s\q\k\8\7\u\8\a\5\r\8\t\d\p\4\q\h\s\u\y\g\h\p\4\l\p\5\7\t\y\5\y\c\f\f\m\v\7\a\9\r\d\3\p\k\r\6\u\2\b\1\9\p\o\a\9\y\b\m\6\h\o\0\l\6\u\5\e\w\2\f\z\2\p\k\0\u\c\s\6\9\o\8\q\b\o\c\s\n\h\a\i\n\u\9\j\7\y\o\a\7\u\g\4\0\k\l\q\n\l\3\n\y\e\7\6\p\u\k\m\y\r\l\s\b\m\v\b\q\4\a\b\7\6\g\s\d\a\g\o\x\c\z\1\a\e\0\9\o\g\4\z\v\p\s\t\e\a\i\p\e\n\d\h\y\v\a\c\m\8\u\6\7\4\p\c\3\5\v\d\b\l\4\v\7\0\p\2\a\h\h\z\f\5\g\e\y\x\u\w\c\9\h\b\t\h\q\v\7\u\6\q\i\n\d\e\6\0\w\g\h\j\c\r\t\5\w\a\e\g\8\p\6\i\z\c\p\k\l\k\a\l\p\u\v\6\s\9\1\e\j\o\h\o\2\w\e\7\6\o\r\k\5\m\j\t\w\1\d\l\w\e\h\i\w\c\v\j\0\z\7\f\7\x\s\p\5\5\w\x\b\7\o\1\d\t\y\4\g\o\1\8\l\z\i\4\m\8\m\m\e\5\b\f\9\y\e\z\t\t\7\x\0\8\9\s\j\m\3\9\v\a\1\1\a\f\4\5\1\1\j\g\l\7\5\d\6\h ]] 00:07:53.124 09:45:18 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:53.124 09:45:18 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:07:53.124 09:45:18 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:53.124 09:45:18 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:53.124 09:45:18 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:53.124 09:45:18 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:53.124 [2024-12-06 09:45:18.336194] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:07:53.124 [2024-12-06 09:45:18.336326] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60500 ] 00:07:53.383 [2024-12-06 09:45:18.482277] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.383 [2024-12-06 09:45:18.533374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.383 [2024-12-06 09:45:18.590074] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:53.383  [2024-12-06T09:45:18.915Z] Copying: 512/512 [B] (average 500 kBps) 00:07:53.643 00:07:53.643 09:45:18 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ e4ok3kkm2l2tpcpjecis76tiwqq1o2tyaoidvd63329emawg7cmdhvgyc7l282kv1fbv11zv86jv0me6fp0dsz55p6mk8s6m1z8ox8rtf33fihe92qfnm4n2ik880p0hy6kir2kospgfdid2070ibsq6y47mqvpbak9a79ufnd094klwvjshj1jnz70o61d6kpl5eassvxgtelht6lpleczb7xik6jr1spfj116tfrsnto82c2e45hfg5brv6la1ptrec3yd2bgbgk0lpvnhvbfjeos6ryjfga4kgxdvwmmh2qr4otahxqo0sc8b4dr7vysataurqh3ng5ytxbfvwklk20nfcplom6nkzhwi7e8udk2admh9nbh61kopb9hju1qtbzg3jqlcq0icqrvnj6vl1gvly7f8h6u7ath9pkbymmd8u02h69509e94ayzus3h2su0g8s9ojxje5mpbwl52zatdsqj68xmq4se1xpoqu6bvecy64q497jet4zk9 == \e\4\o\k\3\k\k\m\2\l\2\t\p\c\p\j\e\c\i\s\7\6\t\i\w\q\q\1\o\2\t\y\a\o\i\d\v\d\6\3\3\2\9\e\m\a\w\g\7\c\m\d\h\v\g\y\c\7\l\2\8\2\k\v\1\f\b\v\1\1\z\v\8\6\j\v\0\m\e\6\f\p\0\d\s\z\5\5\p\6\m\k\8\s\6\m\1\z\8\o\x\8\r\t\f\3\3\f\i\h\e\9\2\q\f\n\m\4\n\2\i\k\8\8\0\p\0\h\y\6\k\i\r\2\k\o\s\p\g\f\d\i\d\2\0\7\0\i\b\s\q\6\y\4\7\m\q\v\p\b\a\k\9\a\7\9\u\f\n\d\0\9\4\k\l\w\v\j\s\h\j\1\j\n\z\7\0\o\6\1\d\6\k\p\l\5\e\a\s\s\v\x\g\t\e\l\h\t\6\l\p\l\e\c\z\b\7\x\i\k\6\j\r\1\s\p\f\j\1\1\6\t\f\r\s\n\t\o\8\2\c\2\e\4\5\h\f\g\5\b\r\v\6\l\a\1\p\t\r\e\c\3\y\d\2\b\g\b\g\k\0\l\p\v\n\h\v\b\f\j\e\o\s\6\r\y\j\f\g\a\4\k\g\x\d\v\w\m\m\h\2\q\r\4\o\t\a\h\x\q\o\0\s\c\8\b\4\d\r\7\v\y\s\a\t\a\u\r\q\h\3\n\g\5\y\t\x\b\f\v\w\k\l\k\2\0\n\f\c\p\l\o\m\6\n\k\z\h\w\i\7\e\8\u\d\k\2\a\d\m\h\9\n\b\h\6\1\k\o\p\b\9\h\j\u\1\q\t\b\z\g\3\j\q\l\c\q\0\i\c\q\r\v\n\j\6\v\l\1\g\v\l\y\7\f\8\h\6\u\7\a\t\h\9\p\k\b\y\m\m\d\8\u\0\2\h\6\9\5\0\9\e\9\4\a\y\z\u\s\3\h\2\s\u\0\g\8\s\9\o\j\x\j\e\5\m\p\b\w\l\5\2\z\a\t\d\s\q\j\6\8\x\m\q\4\s\e\1\x\p\o\q\u\6\b\v\e\c\y\6\4\q\4\9\7\j\e\t\4\z\k\9 ]] 00:07:53.643 09:45:18 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:53.643 09:45:18 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:53.643 [2024-12-06 09:45:18.892367] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:07:53.643 [2024-12-06 09:45:18.892501] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60507 ] 00:07:53.903 [2024-12-06 09:45:19.041702] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.903 [2024-12-06 09:45:19.081838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.903 [2024-12-06 09:45:19.132982] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:53.903  [2024-12-06T09:45:19.434Z] Copying: 512/512 [B] (average 500 kBps) 00:07:54.162 00:07:54.162 09:45:19 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ e4ok3kkm2l2tpcpjecis76tiwqq1o2tyaoidvd63329emawg7cmdhvgyc7l282kv1fbv11zv86jv0me6fp0dsz55p6mk8s6m1z8ox8rtf33fihe92qfnm4n2ik880p0hy6kir2kospgfdid2070ibsq6y47mqvpbak9a79ufnd094klwvjshj1jnz70o61d6kpl5eassvxgtelht6lpleczb7xik6jr1spfj116tfrsnto82c2e45hfg5brv6la1ptrec3yd2bgbgk0lpvnhvbfjeos6ryjfga4kgxdvwmmh2qr4otahxqo0sc8b4dr7vysataurqh3ng5ytxbfvwklk20nfcplom6nkzhwi7e8udk2admh9nbh61kopb9hju1qtbzg3jqlcq0icqrvnj6vl1gvly7f8h6u7ath9pkbymmd8u02h69509e94ayzus3h2su0g8s9ojxje5mpbwl52zatdsqj68xmq4se1xpoqu6bvecy64q497jet4zk9 == \e\4\o\k\3\k\k\m\2\l\2\t\p\c\p\j\e\c\i\s\7\6\t\i\w\q\q\1\o\2\t\y\a\o\i\d\v\d\6\3\3\2\9\e\m\a\w\g\7\c\m\d\h\v\g\y\c\7\l\2\8\2\k\v\1\f\b\v\1\1\z\v\8\6\j\v\0\m\e\6\f\p\0\d\s\z\5\5\p\6\m\k\8\s\6\m\1\z\8\o\x\8\r\t\f\3\3\f\i\h\e\9\2\q\f\n\m\4\n\2\i\k\8\8\0\p\0\h\y\6\k\i\r\2\k\o\s\p\g\f\d\i\d\2\0\7\0\i\b\s\q\6\y\4\7\m\q\v\p\b\a\k\9\a\7\9\u\f\n\d\0\9\4\k\l\w\v\j\s\h\j\1\j\n\z\7\0\o\6\1\d\6\k\p\l\5\e\a\s\s\v\x\g\t\e\l\h\t\6\l\p\l\e\c\z\b\7\x\i\k\6\j\r\1\s\p\f\j\1\1\6\t\f\r\s\n\t\o\8\2\c\2\e\4\5\h\f\g\5\b\r\v\6\l\a\1\p\t\r\e\c\3\y\d\2\b\g\b\g\k\0\l\p\v\n\h\v\b\f\j\e\o\s\6\r\y\j\f\g\a\4\k\g\x\d\v\w\m\m\h\2\q\r\4\o\t\a\h\x\q\o\0\s\c\8\b\4\d\r\7\v\y\s\a\t\a\u\r\q\h\3\n\g\5\y\t\x\b\f\v\w\k\l\k\2\0\n\f\c\p\l\o\m\6\n\k\z\h\w\i\7\e\8\u\d\k\2\a\d\m\h\9\n\b\h\6\1\k\o\p\b\9\h\j\u\1\q\t\b\z\g\3\j\q\l\c\q\0\i\c\q\r\v\n\j\6\v\l\1\g\v\l\y\7\f\8\h\6\u\7\a\t\h\9\p\k\b\y\m\m\d\8\u\0\2\h\6\9\5\0\9\e\9\4\a\y\z\u\s\3\h\2\s\u\0\g\8\s\9\o\j\x\j\e\5\m\p\b\w\l\5\2\z\a\t\d\s\q\j\6\8\x\m\q\4\s\e\1\x\p\o\q\u\6\b\v\e\c\y\6\4\q\4\9\7\j\e\t\4\z\k\9 ]] 00:07:54.162 09:45:19 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:54.162 09:45:19 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:54.162 [2024-12-06 09:45:19.424505] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:07:54.162 [2024-12-06 09:45:19.424666] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60515 ] 00:07:54.421 [2024-12-06 09:45:19.564712] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.421 [2024-12-06 09:45:19.604307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.421 [2024-12-06 09:45:19.661062] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:54.680  [2024-12-06T09:45:19.952Z] Copying: 512/512 [B] (average 250 kBps) 00:07:54.680 00:07:54.680 09:45:19 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ e4ok3kkm2l2tpcpjecis76tiwqq1o2tyaoidvd63329emawg7cmdhvgyc7l282kv1fbv11zv86jv0me6fp0dsz55p6mk8s6m1z8ox8rtf33fihe92qfnm4n2ik880p0hy6kir2kospgfdid2070ibsq6y47mqvpbak9a79ufnd094klwvjshj1jnz70o61d6kpl5eassvxgtelht6lpleczb7xik6jr1spfj116tfrsnto82c2e45hfg5brv6la1ptrec3yd2bgbgk0lpvnhvbfjeos6ryjfga4kgxdvwmmh2qr4otahxqo0sc8b4dr7vysataurqh3ng5ytxbfvwklk20nfcplom6nkzhwi7e8udk2admh9nbh61kopb9hju1qtbzg3jqlcq0icqrvnj6vl1gvly7f8h6u7ath9pkbymmd8u02h69509e94ayzus3h2su0g8s9ojxje5mpbwl52zatdsqj68xmq4se1xpoqu6bvecy64q497jet4zk9 == \e\4\o\k\3\k\k\m\2\l\2\t\p\c\p\j\e\c\i\s\7\6\t\i\w\q\q\1\o\2\t\y\a\o\i\d\v\d\6\3\3\2\9\e\m\a\w\g\7\c\m\d\h\v\g\y\c\7\l\2\8\2\k\v\1\f\b\v\1\1\z\v\8\6\j\v\0\m\e\6\f\p\0\d\s\z\5\5\p\6\m\k\8\s\6\m\1\z\8\o\x\8\r\t\f\3\3\f\i\h\e\9\2\q\f\n\m\4\n\2\i\k\8\8\0\p\0\h\y\6\k\i\r\2\k\o\s\p\g\f\d\i\d\2\0\7\0\i\b\s\q\6\y\4\7\m\q\v\p\b\a\k\9\a\7\9\u\f\n\d\0\9\4\k\l\w\v\j\s\h\j\1\j\n\z\7\0\o\6\1\d\6\k\p\l\5\e\a\s\s\v\x\g\t\e\l\h\t\6\l\p\l\e\c\z\b\7\x\i\k\6\j\r\1\s\p\f\j\1\1\6\t\f\r\s\n\t\o\8\2\c\2\e\4\5\h\f\g\5\b\r\v\6\l\a\1\p\t\r\e\c\3\y\d\2\b\g\b\g\k\0\l\p\v\n\h\v\b\f\j\e\o\s\6\r\y\j\f\g\a\4\k\g\x\d\v\w\m\m\h\2\q\r\4\o\t\a\h\x\q\o\0\s\c\8\b\4\d\r\7\v\y\s\a\t\a\u\r\q\h\3\n\g\5\y\t\x\b\f\v\w\k\l\k\2\0\n\f\c\p\l\o\m\6\n\k\z\h\w\i\7\e\8\u\d\k\2\a\d\m\h\9\n\b\h\6\1\k\o\p\b\9\h\j\u\1\q\t\b\z\g\3\j\q\l\c\q\0\i\c\q\r\v\n\j\6\v\l\1\g\v\l\y\7\f\8\h\6\u\7\a\t\h\9\p\k\b\y\m\m\d\8\u\0\2\h\6\9\5\0\9\e\9\4\a\y\z\u\s\3\h\2\s\u\0\g\8\s\9\o\j\x\j\e\5\m\p\b\w\l\5\2\z\a\t\d\s\q\j\6\8\x\m\q\4\s\e\1\x\p\o\q\u\6\b\v\e\c\y\6\4\q\4\9\7\j\e\t\4\z\k\9 ]] 00:07:54.680 09:45:19 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:54.680 09:45:19 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:54.938 [2024-12-06 09:45:20.001591] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:07:54.938 [2024-12-06 09:45:20.001736] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60522 ] 00:07:54.938 [2024-12-06 09:45:20.150890] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.938 [2024-12-06 09:45:20.206948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.198 [2024-12-06 09:45:20.267794] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:55.198  [2024-12-06T09:45:20.730Z] Copying: 512/512 [B] (average 166 kBps) 00:07:55.458 00:07:55.458 09:45:20 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ e4ok3kkm2l2tpcpjecis76tiwqq1o2tyaoidvd63329emawg7cmdhvgyc7l282kv1fbv11zv86jv0me6fp0dsz55p6mk8s6m1z8ox8rtf33fihe92qfnm4n2ik880p0hy6kir2kospgfdid2070ibsq6y47mqvpbak9a79ufnd094klwvjshj1jnz70o61d6kpl5eassvxgtelht6lpleczb7xik6jr1spfj116tfrsnto82c2e45hfg5brv6la1ptrec3yd2bgbgk0lpvnhvbfjeos6ryjfga4kgxdvwmmh2qr4otahxqo0sc8b4dr7vysataurqh3ng5ytxbfvwklk20nfcplom6nkzhwi7e8udk2admh9nbh61kopb9hju1qtbzg3jqlcq0icqrvnj6vl1gvly7f8h6u7ath9pkbymmd8u02h69509e94ayzus3h2su0g8s9ojxje5mpbwl52zatdsqj68xmq4se1xpoqu6bvecy64q497jet4zk9 == \e\4\o\k\3\k\k\m\2\l\2\t\p\c\p\j\e\c\i\s\7\6\t\i\w\q\q\1\o\2\t\y\a\o\i\d\v\d\6\3\3\2\9\e\m\a\w\g\7\c\m\d\h\v\g\y\c\7\l\2\8\2\k\v\1\f\b\v\1\1\z\v\8\6\j\v\0\m\e\6\f\p\0\d\s\z\5\5\p\6\m\k\8\s\6\m\1\z\8\o\x\8\r\t\f\3\3\f\i\h\e\9\2\q\f\n\m\4\n\2\i\k\8\8\0\p\0\h\y\6\k\i\r\2\k\o\s\p\g\f\d\i\d\2\0\7\0\i\b\s\q\6\y\4\7\m\q\v\p\b\a\k\9\a\7\9\u\f\n\d\0\9\4\k\l\w\v\j\s\h\j\1\j\n\z\7\0\o\6\1\d\6\k\p\l\5\e\a\s\s\v\x\g\t\e\l\h\t\6\l\p\l\e\c\z\b\7\x\i\k\6\j\r\1\s\p\f\j\1\1\6\t\f\r\s\n\t\o\8\2\c\2\e\4\5\h\f\g\5\b\r\v\6\l\a\1\p\t\r\e\c\3\y\d\2\b\g\b\g\k\0\l\p\v\n\h\v\b\f\j\e\o\s\6\r\y\j\f\g\a\4\k\g\x\d\v\w\m\m\h\2\q\r\4\o\t\a\h\x\q\o\0\s\c\8\b\4\d\r\7\v\y\s\a\t\a\u\r\q\h\3\n\g\5\y\t\x\b\f\v\w\k\l\k\2\0\n\f\c\p\l\o\m\6\n\k\z\h\w\i\7\e\8\u\d\k\2\a\d\m\h\9\n\b\h\6\1\k\o\p\b\9\h\j\u\1\q\t\b\z\g\3\j\q\l\c\q\0\i\c\q\r\v\n\j\6\v\l\1\g\v\l\y\7\f\8\h\6\u\7\a\t\h\9\p\k\b\y\m\m\d\8\u\0\2\h\6\9\5\0\9\e\9\4\a\y\z\u\s\3\h\2\s\u\0\g\8\s\9\o\j\x\j\e\5\m\p\b\w\l\5\2\z\a\t\d\s\q\j\6\8\x\m\q\4\s\e\1\x\p\o\q\u\6\b\v\e\c\y\6\4\q\4\9\7\j\e\t\4\z\k\9 ]] 00:07:55.458 00:07:55.458 real 0m4.479s 00:07:55.458 user 0m2.350s 00:07:55.458 sys 0m1.141s 00:07:55.458 09:45:20 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:55.458 09:45:20 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:55.458 ************************************ 00:07:55.458 END TEST dd_flags_misc_forced_aio 00:07:55.458 ************************************ 00:07:55.458 09:45:20 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:07:55.458 09:45:20 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:55.458 09:45:20 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:55.458 00:07:55.458 real 0m20.099s 00:07:55.458 user 0m9.388s 00:07:55.458 sys 0m6.646s 00:07:55.458 09:45:20 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:55.458 09:45:20 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 
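Note: both read flags (direct, nonblock) have now been swept against all four write flags (direct, nonblock, sync, dsync), with a fresh 512-byte random payload generated per read flag and every copy verified byte-for-byte. A minimal sketch of the loop implied by the posix.sh xtrace above (gen_bytes comes from test/dd/common.sh as seen in the trace; the file-content comparison is paraphrased, the real script matches the generated string against the copied file):

  src=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
  dst=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
  flags_ro=(direct nonblock)
  flags_rw=("${flags_ro[@]}" sync dsync)
  for flag_ro in "${flags_ro[@]}"; do
    gen_bytes 512                                  # new random payload into dd.dump0
    for flag_rw in "${flags_rw[@]}"; do
      /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio \
        --if="$src" --iflag="$flag_ro" \
        --of="$dst" --oflag="$flag_rw"
      [[ $(< "$src") == $(< "$dst") ]]             # copy must be bit-identical under every flag pair
    done
  done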
00:07:55.458 ************************************ 00:07:55.458 END TEST spdk_dd_posix 00:07:55.458 ************************************ 00:07:55.458 09:45:20 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:07:55.458 09:45:20 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:55.458 09:45:20 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:55.458 09:45:20 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:55.458 ************************************ 00:07:55.458 START TEST spdk_dd_malloc 00:07:55.458 ************************************ 00:07:55.458 09:45:20 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:07:55.458 * Looking for test storage... 00:07:55.458 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:55.458 09:45:20 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:55.458 09:45:20 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1711 -- # lcov --version 00:07:55.458 09:45:20 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:55.719 09:45:20 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:55.719 09:45:20 spdk_dd.spdk_dd_malloc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:55.719 09:45:20 spdk_dd.spdk_dd_malloc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:55.719 09:45:20 spdk_dd.spdk_dd_malloc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:55.719 09:45:20 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # IFS=.-: 00:07:55.719 09:45:20 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # read -ra ver1 00:07:55.719 09:45:20 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # IFS=.-: 00:07:55.719 09:45:20 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # read -ra ver2 00:07:55.719 09:45:20 spdk_dd.spdk_dd_malloc -- scripts/common.sh@338 -- # local 'op=<' 00:07:55.719 09:45:20 spdk_dd.spdk_dd_malloc -- scripts/common.sh@340 -- # ver1_l=2 00:07:55.719 09:45:20 spdk_dd.spdk_dd_malloc -- scripts/common.sh@341 -- # ver2_l=1 00:07:55.719 09:45:20 spdk_dd.spdk_dd_malloc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:55.719 09:45:20 spdk_dd.spdk_dd_malloc -- scripts/common.sh@344 -- # case "$op" in 00:07:55.719 09:45:20 spdk_dd.spdk_dd_malloc -- scripts/common.sh@345 -- # : 1 00:07:55.719 09:45:20 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:55.719 09:45:20 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:55.719 09:45:20 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # decimal 1 00:07:55.719 09:45:20 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=1 00:07:55.719 09:45:20 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:55.719 09:45:20 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 1 00:07:55.719 09:45:20 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:55.719 09:45:20 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # decimal 2 00:07:55.719 09:45:20 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=2 00:07:55.719 09:45:20 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:55.719 09:45:20 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 2 00:07:55.719 09:45:20 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:55.719 09:45:20 spdk_dd.spdk_dd_malloc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:55.719 09:45:20 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:55.719 09:45:20 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # return 0 00:07:55.719 09:45:20 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:55.719 09:45:20 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:55.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:55.719 --rc genhtml_branch_coverage=1 00:07:55.719 --rc genhtml_function_coverage=1 00:07:55.719 --rc genhtml_legend=1 00:07:55.719 --rc geninfo_all_blocks=1 00:07:55.719 --rc geninfo_unexecuted_blocks=1 00:07:55.719 00:07:55.719 ' 00:07:55.719 09:45:20 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:55.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:55.719 --rc genhtml_branch_coverage=1 00:07:55.719 --rc genhtml_function_coverage=1 00:07:55.719 --rc genhtml_legend=1 00:07:55.719 --rc geninfo_all_blocks=1 00:07:55.719 --rc geninfo_unexecuted_blocks=1 00:07:55.719 00:07:55.719 ' 00:07:55.719 09:45:20 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:55.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:55.719 --rc genhtml_branch_coverage=1 00:07:55.719 --rc genhtml_function_coverage=1 00:07:55.719 --rc genhtml_legend=1 00:07:55.719 --rc geninfo_all_blocks=1 00:07:55.719 --rc geninfo_unexecuted_blocks=1 00:07:55.719 00:07:55.719 ' 00:07:55.719 09:45:20 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:55.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:55.719 --rc genhtml_branch_coverage=1 00:07:55.719 --rc genhtml_function_coverage=1 00:07:55.719 --rc genhtml_legend=1 00:07:55.719 --rc geninfo_all_blocks=1 00:07:55.719 --rc geninfo_unexecuted_blocks=1 00:07:55.719 00:07:55.719 ' 00:07:55.719 09:45:20 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:55.719 09:45:20 spdk_dd.spdk_dd_malloc -- scripts/common.sh@15 -- # shopt -s extglob 00:07:55.719 09:45:20 spdk_dd.spdk_dd_malloc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:55.719 09:45:20 spdk_dd.spdk_dd_malloc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:55.719 09:45:20 spdk_dd.spdk_dd_malloc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:55.719 09:45:20 
spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.719 09:45:20 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.719 09:45:20 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.719 09:45:20 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:07:55.719 09:45:20 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.719 09:45:20 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:07:55.719 09:45:20 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:55.719 09:45:20 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:55.719 09:45:20 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:07:55.719 ************************************ 00:07:55.719 START TEST dd_malloc_copy 00:07:55.719 ************************************ 00:07:55.719 09:45:20 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1129 -- # malloc_copy 00:07:55.719 09:45:20 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:07:55.719 09:45:20 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:07:55.719 09:45:20 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 
00:07:55.719 09:45:20 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:07:55.719 09:45:20 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:07:55.719 09:45:20 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:07:55.719 09:45:20 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:07:55.719 09:45:20 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:07:55.719 09:45:20 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:55.719 09:45:20 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:07:55.719 { 00:07:55.719 "subsystems": [ 00:07:55.719 { 00:07:55.719 "subsystem": "bdev", 00:07:55.719 "config": [ 00:07:55.719 { 00:07:55.719 "params": { 00:07:55.719 "block_size": 512, 00:07:55.719 "num_blocks": 1048576, 00:07:55.719 "name": "malloc0" 00:07:55.719 }, 00:07:55.719 "method": "bdev_malloc_create" 00:07:55.719 }, 00:07:55.719 { 00:07:55.719 "params": { 00:07:55.720 "block_size": 512, 00:07:55.720 "num_blocks": 1048576, 00:07:55.720 "name": "malloc1" 00:07:55.720 }, 00:07:55.720 "method": "bdev_malloc_create" 00:07:55.720 }, 00:07:55.720 { 00:07:55.720 "method": "bdev_wait_for_examine" 00:07:55.720 } 00:07:55.720 ] 00:07:55.720 } 00:07:55.720 ] 00:07:55.720 } 00:07:55.720 [2024-12-06 09:45:20.898868] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:07:55.720 [2024-12-06 09:45:20.899000] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60604 ] 00:07:55.980 [2024-12-06 09:45:21.044162] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.980 [2024-12-06 09:45:21.101045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.980 [2024-12-06 09:45:21.155206] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:57.358  [2024-12-06T09:45:23.568Z] Copying: 232/512 [MB] (232 MBps) [2024-12-06T09:45:24.137Z] Copying: 418/512 [MB] (186 MBps) [2024-12-06T09:45:24.705Z] Copying: 512/512 [MB] (average 212 MBps) 00:07:59.433 00:07:59.433 09:45:24 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:07:59.433 09:45:24 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:07:59.433 09:45:24 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:59.433 09:45:24 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:07:59.433 [2024-12-06 09:45:24.530647] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
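Note: each malloc bdev in the JSON above is 1,048,576 blocks of 512 bytes, i.e. 512 MiB, which is why the progress counter runs to 512/512 [MB]; gen_conf feeds that JSON to spdk_dd through /dev/fd/62. A rough standalone equivalent of the malloc0 -> malloc1 pass, as a sketch only (the /tmp path is illustrative; the config itself is copied from the trace):

  cat > /tmp/malloc_copy.json <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          { "params": { "block_size": 512, "num_blocks": 1048576, "name": "malloc0" }, "method": "bdev_malloc_create" },
          { "params": { "block_size": 512, "num_blocks": 1048576, "name": "malloc1" }, "method": "bdev_malloc_create" },
          { "method": "bdev_wait_for_examine" }
        ]
      }
    ]
  }
  EOF
  # copy the whole 512 MiB bdev-to-bdev, as in the first pass above
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /tmp/malloc_copy.json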
00:07:59.433 [2024-12-06 09:45:24.530725] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60652 ] 00:07:59.433 { 00:07:59.433 "subsystems": [ 00:07:59.433 { 00:07:59.433 "subsystem": "bdev", 00:07:59.433 "config": [ 00:07:59.433 { 00:07:59.433 "params": { 00:07:59.433 "block_size": 512, 00:07:59.433 "num_blocks": 1048576, 00:07:59.433 "name": "malloc0" 00:07:59.433 }, 00:07:59.433 "method": "bdev_malloc_create" 00:07:59.433 }, 00:07:59.433 { 00:07:59.433 "params": { 00:07:59.433 "block_size": 512, 00:07:59.433 "num_blocks": 1048576, 00:07:59.433 "name": "malloc1" 00:07:59.433 }, 00:07:59.433 "method": "bdev_malloc_create" 00:07:59.433 }, 00:07:59.433 { 00:07:59.433 "method": "bdev_wait_for_examine" 00:07:59.433 } 00:07:59.433 ] 00:07:59.433 } 00:07:59.433 ] 00:07:59.433 } 00:07:59.433 [2024-12-06 09:45:24.666507] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.692 [2024-12-06 09:45:24.710329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.692 [2024-12-06 09:45:24.763404] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:01.077  [2024-12-06T09:45:27.286Z] Copying: 189/512 [MB] (189 MBps) [2024-12-06T09:45:27.545Z] Copying: 419/512 [MB] (229 MBps) [2024-12-06T09:45:28.114Z] Copying: 512/512 [MB] (average 213 MBps) 00:08:02.842 00:08:02.842 00:08:02.842 real 0m7.238s 00:08:02.842 user 0m6.273s 00:08:02.842 sys 0m0.820s 00:08:02.842 09:45:28 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:02.842 09:45:28 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:08:02.842 ************************************ 00:08:02.842 END TEST dd_malloc_copy 00:08:02.842 ************************************ 00:08:03.102 00:08:03.102 real 0m7.485s 00:08:03.102 user 0m6.419s 00:08:03.102 sys 0m0.928s 00:08:03.102 09:45:28 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:03.102 09:45:28 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:08:03.102 ************************************ 00:08:03.102 END TEST spdk_dd_malloc 00:08:03.102 ************************************ 00:08:03.102 09:45:28 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:08:03.102 09:45:28 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:03.102 09:45:28 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:03.102 09:45:28 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:03.102 ************************************ 00:08:03.102 START TEST spdk_dd_bdev_to_bdev 00:08:03.102 ************************************ 00:08:03.102 09:45:28 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:08:03.102 * Looking for test storage... 
00:08:03.102 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:03.102 09:45:28 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:03.102 09:45:28 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:03.102 09:45:28 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1711 -- # lcov --version 00:08:03.102 09:45:28 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:03.102 09:45:28 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:03.102 09:45:28 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:03.102 09:45:28 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:03.102 09:45:28 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # IFS=.-: 00:08:03.102 09:45:28 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # read -ra ver1 00:08:03.102 09:45:28 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # IFS=.-: 00:08:03.102 09:45:28 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # read -ra ver2 00:08:03.102 09:45:28 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@338 -- # local 'op=<' 00:08:03.102 09:45:28 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@340 -- # ver1_l=2 00:08:03.102 09:45:28 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@341 -- # ver2_l=1 00:08:03.102 09:45:28 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:03.102 09:45:28 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@344 -- # case "$op" in 00:08:03.102 09:45:28 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@345 -- # : 1 00:08:03.102 09:45:28 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:03.102 09:45:28 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:03.102 09:45:28 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # decimal 1 00:08:03.102 09:45:28 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=1 00:08:03.102 09:45:28 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:03.102 09:45:28 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 1 00:08:03.102 09:45:28 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # ver1[v]=1 00:08:03.102 09:45:28 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # decimal 2 00:08:03.102 09:45:28 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=2 00:08:03.102 09:45:28 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:03.102 09:45:28 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 2 00:08:03.102 09:45:28 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # ver2[v]=2 00:08:03.102 09:45:28 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:03.102 09:45:28 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:03.102 09:45:28 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # return 0 00:08:03.102 09:45:28 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:03.102 09:45:28 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:03.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:03.102 --rc genhtml_branch_coverage=1 00:08:03.102 --rc genhtml_function_coverage=1 00:08:03.102 --rc genhtml_legend=1 00:08:03.102 --rc geninfo_all_blocks=1 00:08:03.102 --rc geninfo_unexecuted_blocks=1 00:08:03.102 00:08:03.103 ' 00:08:03.103 09:45:28 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:03.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:03.103 --rc genhtml_branch_coverage=1 00:08:03.103 --rc genhtml_function_coverage=1 00:08:03.103 --rc genhtml_legend=1 00:08:03.103 --rc geninfo_all_blocks=1 00:08:03.103 --rc geninfo_unexecuted_blocks=1 00:08:03.103 00:08:03.103 ' 00:08:03.103 09:45:28 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:03.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:03.103 --rc genhtml_branch_coverage=1 00:08:03.103 --rc genhtml_function_coverage=1 00:08:03.103 --rc genhtml_legend=1 00:08:03.103 --rc geninfo_all_blocks=1 00:08:03.103 --rc geninfo_unexecuted_blocks=1 00:08:03.103 00:08:03.103 ' 00:08:03.103 09:45:28 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:03.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:03.103 --rc genhtml_branch_coverage=1 00:08:03.103 --rc genhtml_function_coverage=1 00:08:03.103 --rc genhtml_legend=1 00:08:03.103 --rc geninfo_all_blocks=1 00:08:03.103 --rc geninfo_unexecuted_blocks=1 00:08:03.103 00:08:03.103 ' 00:08:03.103 09:45:28 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:03.103 09:45:28 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@15 -- # shopt -s extglob 00:08:03.103 09:45:28 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:03.103 09:45:28 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:03.103 09:45:28 
spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:03.103 09:45:28 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.103 09:45:28 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.103 09:45:28 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.103 09:45:28 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:08:03.103 09:45:28 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.103 09:45:28 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:08:03.103 09:45:28 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:08:03.103 09:45:28 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:08:03.363 09:45:28 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:08:03.363 09:45:28 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:08:03.363 09:45:28 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:08:03.363 09:45:28 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:08:03.363 09:45:28 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:08:03.363 09:45:28 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:08:03.363 09:45:28 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # 
nvme1_pci=0000:00:11.0 00:08:03.363 09:45:28 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:08:03.363 09:45:28 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:08:03.363 09:45:28 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:08:03.363 09:45:28 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:08:03.363 09:45:28 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:03.363 09:45:28 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:03.363 09:45:28 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:08:03.363 09:45:28 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:08:03.363 09:45:28 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:08:03.363 09:45:28 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:08:03.363 09:45:28 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:03.363 09:45:28 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:03.363 ************************************ 00:08:03.363 START TEST dd_inflate_file 00:08:03.363 ************************************ 00:08:03.363 09:45:28 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:08:03.363 [2024-12-06 09:45:28.448329] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:08:03.363 [2024-12-06 09:45:28.449141] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60764 ] 00:08:03.363 [2024-12-06 09:45:28.596035] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.622 [2024-12-06 09:45:28.643459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.622 [2024-12-06 09:45:28.696304] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:03.622  [2024-12-06T09:45:29.154Z] Copying: 64/64 [MB] (average 1254 MBps) 00:08:03.882 00:08:03.882 00:08:03.882 real 0m0.575s 00:08:03.882 user 0m0.325s 00:08:03.882 sys 0m0.319s 00:08:03.882 ************************************ 00:08:03.882 END TEST dd_inflate_file 00:08:03.882 ************************************ 00:08:03.882 09:45:28 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:03.882 09:45:28 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:08:03.882 09:45:28 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:08:03.882 09:45:29 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:08:03.882 09:45:29 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:08:03.882 09:45:29 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:08:03.882 09:45:29 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:08:03.882 09:45:29 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:08:03.882 09:45:29 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:03.882 09:45:29 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:03.882 09:45:29 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:03.882 ************************************ 00:08:03.882 START TEST dd_copy_to_out_bdev 00:08:03.882 ************************************ 00:08:03.882 09:45:29 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:08:03.882 { 00:08:03.882 "subsystems": [ 00:08:03.882 { 00:08:03.882 "subsystem": "bdev", 00:08:03.882 "config": [ 00:08:03.882 { 00:08:03.882 "params": { 00:08:03.882 "trtype": "pcie", 00:08:03.882 "traddr": "0000:00:10.0", 00:08:03.882 "name": "Nvme0" 00:08:03.882 }, 00:08:03.882 "method": "bdev_nvme_attach_controller" 00:08:03.882 }, 00:08:03.882 { 00:08:03.882 "params": { 00:08:03.882 "trtype": "pcie", 00:08:03.882 "traddr": "0000:00:11.0", 00:08:03.882 "name": "Nvme1" 00:08:03.882 }, 00:08:03.882 "method": "bdev_nvme_attach_controller" 00:08:03.882 }, 00:08:03.882 { 00:08:03.882 "method": "bdev_wait_for_examine" 00:08:03.882 } 00:08:03.882 ] 00:08:03.882 } 00:08:03.882 ] 00:08:03.882 } 00:08:03.882 [2024-12-06 09:45:29.075737] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
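Note: the test_file0_size=67108891 figure checked by wc -c above is plain arithmetic: dd_inflate_file appended 64 blocks of 1,048,576 zero bytes onto dd.dump0, which presumably already held the 27-byte magic line (26 characters of 'This Is Our Magic, find it' plus the trailing newline from echo):

  echo $(( 64 * 1048576 + 27 ))   # 67108864 + 27 = 67108891, matching the wc -c result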
00:08:03.882 [2024-12-06 09:45:29.075853] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60803 ] 00:08:04.142 [2024-12-06 09:45:29.227280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.142 [2024-12-06 09:45:29.288844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.142 [2024-12-06 09:45:29.352719] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:05.515  [2024-12-06T09:45:31.046Z] Copying: 49/64 [MB] (49 MBps) [2024-12-06T09:45:31.305Z] Copying: 64/64 [MB] (average 49 MBps) 00:08:06.033 00:08:06.033 00:08:06.033 real 0m2.073s 00:08:06.033 user 0m1.808s 00:08:06.033 sys 0m1.692s 00:08:06.033 09:45:31 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:06.033 09:45:31 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:06.033 ************************************ 00:08:06.033 END TEST dd_copy_to_out_bdev 00:08:06.033 ************************************ 00:08:06.033 09:45:31 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:08:06.033 09:45:31 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:08:06.033 09:45:31 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:06.033 09:45:31 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:06.033 09:45:31 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:06.033 ************************************ 00:08:06.033 START TEST dd_offset_magic 00:08:06.033 ************************************ 00:08:06.033 09:45:31 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1129 -- # offset_magic 00:08:06.033 09:45:31 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:08:06.033 09:45:31 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:08:06.033 09:45:31 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:08:06.033 09:45:31 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:08:06.033 09:45:31 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:08:06.033 09:45:31 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:08:06.033 09:45:31 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:08:06.033 09:45:31 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:06.033 [2024-12-06 09:45:31.197969] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:08:06.033 [2024-12-06 09:45:31.198049] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60848 ] 00:08:06.033 { 00:08:06.033 "subsystems": [ 00:08:06.033 { 00:08:06.033 "subsystem": "bdev", 00:08:06.033 "config": [ 00:08:06.033 { 00:08:06.033 "params": { 00:08:06.033 "trtype": "pcie", 00:08:06.033 "traddr": "0000:00:10.0", 00:08:06.033 "name": "Nvme0" 00:08:06.033 }, 00:08:06.033 "method": "bdev_nvme_attach_controller" 00:08:06.033 }, 00:08:06.033 { 00:08:06.033 "params": { 00:08:06.033 "trtype": "pcie", 00:08:06.033 "traddr": "0000:00:11.0", 00:08:06.033 "name": "Nvme1" 00:08:06.033 }, 00:08:06.033 "method": "bdev_nvme_attach_controller" 00:08:06.033 }, 00:08:06.033 { 00:08:06.033 "method": "bdev_wait_for_examine" 00:08:06.033 } 00:08:06.033 ] 00:08:06.033 } 00:08:06.033 ] 00:08:06.033 } 00:08:06.292 [2024-12-06 09:45:31.341091] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.292 [2024-12-06 09:45:31.393067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.292 [2024-12-06 09:45:31.456479] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:06.552  [2024-12-06T09:45:32.082Z] Copying: 65/65 [MB] (average 812 MBps) 00:08:06.810 00:08:06.810 09:45:31 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:08:06.810 09:45:31 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:08:06.810 09:45:31 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:08:06.810 09:45:31 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:06.810 [2024-12-06 09:45:32.036950] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:08:06.810 [2024-12-06 09:45:32.037149] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60863 ] 00:08:06.810 { 00:08:06.810 "subsystems": [ 00:08:06.810 { 00:08:06.810 "subsystem": "bdev", 00:08:06.810 "config": [ 00:08:06.810 { 00:08:06.810 "params": { 00:08:06.810 "trtype": "pcie", 00:08:06.810 "traddr": "0000:00:10.0", 00:08:06.810 "name": "Nvme0" 00:08:06.810 }, 00:08:06.810 "method": "bdev_nvme_attach_controller" 00:08:06.810 }, 00:08:06.811 { 00:08:06.811 "params": { 00:08:06.811 "trtype": "pcie", 00:08:06.811 "traddr": "0000:00:11.0", 00:08:06.811 "name": "Nvme1" 00:08:06.811 }, 00:08:06.811 "method": "bdev_nvme_attach_controller" 00:08:06.811 }, 00:08:06.811 { 00:08:06.811 "method": "bdev_wait_for_examine" 00:08:06.811 } 00:08:06.811 ] 00:08:06.811 } 00:08:06.811 ] 00:08:06.811 } 00:08:07.069 [2024-12-06 09:45:32.181874] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.069 [2024-12-06 09:45:32.231377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.069 [2024-12-06 09:45:32.293274] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:07.328  [2024-12-06T09:45:32.858Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:08:07.586 00:08:07.586 09:45:32 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:08:07.586 09:45:32 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:08:07.586 09:45:32 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:08:07.586 09:45:32 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:08:07.586 09:45:32 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:08:07.586 09:45:32 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:08:07.586 09:45:32 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:07.586 [2024-12-06 09:45:32.745301] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
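Note: the read -rn26 check above closes the first offset's round trip: 65 MiB of the magic-bearing Nvme0n1 are copied onto Nvme1n1 at a 16 MiB seek, 1 MiB is read back from the same offset into dd.dump1, and its first 26 bytes must still equal the magic. A condensed sketch of that loop, reconstructed from the xtrace (spdk_dd abbreviates the full build/bin path used in the trace, and $conf stands in for the gen_conf JSON passed on /dev/fd/62; error handling omitted):

  magic='This Is Our Magic, find it'
  for offset in 16 64; do
    spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek="$offset" --bs=1048576 --json "$conf"   # push the magic out to the offset
    spdk_dd --ib=Nvme1n1 --of=dd.dump1 --count=1 --skip="$offset" --bs=1048576 --json "$conf"   # pull 1 MiB back from the same offset
    read -rn26 magic_check < dd.dump1
    [[ $magic_check == "$magic" ]]                                                              # the magic must survive the detour
  done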
00:08:07.586 [2024-12-06 09:45:32.745370] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60885 ] 00:08:07.586 { 00:08:07.586 "subsystems": [ 00:08:07.586 { 00:08:07.586 "subsystem": "bdev", 00:08:07.586 "config": [ 00:08:07.586 { 00:08:07.586 "params": { 00:08:07.586 "trtype": "pcie", 00:08:07.586 "traddr": "0000:00:10.0", 00:08:07.586 "name": "Nvme0" 00:08:07.586 }, 00:08:07.586 "method": "bdev_nvme_attach_controller" 00:08:07.586 }, 00:08:07.586 { 00:08:07.586 "params": { 00:08:07.586 "trtype": "pcie", 00:08:07.586 "traddr": "0000:00:11.0", 00:08:07.586 "name": "Nvme1" 00:08:07.586 }, 00:08:07.586 "method": "bdev_nvme_attach_controller" 00:08:07.586 }, 00:08:07.586 { 00:08:07.586 "method": "bdev_wait_for_examine" 00:08:07.586 } 00:08:07.586 ] 00:08:07.586 } 00:08:07.586 ] 00:08:07.586 } 00:08:07.845 [2024-12-06 09:45:32.888309] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.845 [2024-12-06 09:45:32.940622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.845 [2024-12-06 09:45:33.003036] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:08.103  [2024-12-06T09:45:33.636Z] Copying: 65/65 [MB] (average 915 MBps) 00:08:08.364 00:08:08.364 09:45:33 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:08:08.364 09:45:33 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:08:08.364 09:45:33 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:08:08.364 09:45:33 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:08.364 { 00:08:08.364 "subsystems": [ 00:08:08.364 { 00:08:08.364 "subsystem": "bdev", 00:08:08.364 "config": [ 00:08:08.364 { 00:08:08.364 "params": { 00:08:08.364 "trtype": "pcie", 00:08:08.364 "traddr": "0000:00:10.0", 00:08:08.364 "name": "Nvme0" 00:08:08.364 }, 00:08:08.364 "method": "bdev_nvme_attach_controller" 00:08:08.364 }, 00:08:08.364 { 00:08:08.364 "params": { 00:08:08.364 "trtype": "pcie", 00:08:08.364 "traddr": "0000:00:11.0", 00:08:08.364 "name": "Nvme1" 00:08:08.364 }, 00:08:08.364 "method": "bdev_nvme_attach_controller" 00:08:08.364 }, 00:08:08.364 { 00:08:08.364 "method": "bdev_wait_for_examine" 00:08:08.364 } 00:08:08.364 ] 00:08:08.364 } 00:08:08.364 ] 00:08:08.364 } 00:08:08.364 [2024-12-06 09:45:33.596236] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:08:08.364 [2024-12-06 09:45:33.596368] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60900 ] 00:08:08.651 [2024-12-06 09:45:33.753525] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.651 [2024-12-06 09:45:33.806135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.651 [2024-12-06 09:45:33.868157] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:08.918  [2024-12-06T09:45:34.447Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:08:09.175 00:08:09.175 09:45:34 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:08:09.175 09:45:34 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:08:09.175 00:08:09.175 real 0m3.125s 00:08:09.175 user 0m2.214s 00:08:09.175 sys 0m1.020s 00:08:09.175 09:45:34 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:09.175 09:45:34 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:09.175 ************************************ 00:08:09.175 END TEST dd_offset_magic 00:08:09.175 ************************************ 00:08:09.175 09:45:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:08:09.175 09:45:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:08:09.175 09:45:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:09.175 09:45:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:08:09.175 09:45:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:08:09.175 09:45:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:08:09.175 09:45:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:08:09.175 09:45:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:08:09.175 09:45:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:08:09.175 09:45:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:08:09.175 09:45:34 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:09.175 [2024-12-06 09:45:34.370135] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:08:09.175 [2024-12-06 09:45:34.370245] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60937 ] 00:08:09.175 { 00:08:09.175 "subsystems": [ 00:08:09.175 { 00:08:09.175 "subsystem": "bdev", 00:08:09.175 "config": [ 00:08:09.175 { 00:08:09.175 "params": { 00:08:09.175 "trtype": "pcie", 00:08:09.175 "traddr": "0000:00:10.0", 00:08:09.175 "name": "Nvme0" 00:08:09.175 }, 00:08:09.175 "method": "bdev_nvme_attach_controller" 00:08:09.175 }, 00:08:09.175 { 00:08:09.175 "params": { 00:08:09.175 "trtype": "pcie", 00:08:09.175 "traddr": "0000:00:11.0", 00:08:09.175 "name": "Nvme1" 00:08:09.175 }, 00:08:09.175 "method": "bdev_nvme_attach_controller" 00:08:09.175 }, 00:08:09.175 { 00:08:09.176 "method": "bdev_wait_for_examine" 00:08:09.176 } 00:08:09.176 ] 00:08:09.176 } 00:08:09.176 ] 00:08:09.176 } 00:08:09.433 [2024-12-06 09:45:34.517929] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.433 [2024-12-06 09:45:34.571700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.433 [2024-12-06 09:45:34.634641] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:09.692  [2024-12-06T09:45:35.223Z] Copying: 5120/5120 [kB] (average 833 MBps) 00:08:09.951 00:08:09.951 09:45:35 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:08:09.951 09:45:35 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:08:09.951 09:45:35 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:08:09.951 09:45:35 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:08:09.951 09:45:35 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:08:09.951 09:45:35 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:08:09.951 09:45:35 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:08:09.951 09:45:35 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:08:09.951 09:45:35 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:08:09.951 09:45:35 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:09.951 [2024-12-06 09:45:35.096892] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:08:09.951 [2024-12-06 09:45:35.096990] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60953 ] 00:08:09.951 { 00:08:09.951 "subsystems": [ 00:08:09.951 { 00:08:09.951 "subsystem": "bdev", 00:08:09.951 "config": [ 00:08:09.951 { 00:08:09.951 "params": { 00:08:09.951 "trtype": "pcie", 00:08:09.951 "traddr": "0000:00:10.0", 00:08:09.951 "name": "Nvme0" 00:08:09.951 }, 00:08:09.951 "method": "bdev_nvme_attach_controller" 00:08:09.951 }, 00:08:09.951 { 00:08:09.951 "params": { 00:08:09.951 "trtype": "pcie", 00:08:09.951 "traddr": "0000:00:11.0", 00:08:09.951 "name": "Nvme1" 00:08:09.951 }, 00:08:09.951 "method": "bdev_nvme_attach_controller" 00:08:09.951 }, 00:08:09.951 { 00:08:09.951 "method": "bdev_wait_for_examine" 00:08:09.951 } 00:08:09.951 ] 00:08:09.951 } 00:08:09.951 ] 00:08:09.951 } 00:08:10.209 [2024-12-06 09:45:35.244163] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.209 [2024-12-06 09:45:35.296291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.209 [2024-12-06 09:45:35.359877] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:10.468  [2024-12-06T09:45:35.998Z] Copying: 5120/5120 [kB] (average 625 MBps) 00:08:10.726 00:08:10.726 09:45:35 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:08:10.726 00:08:10.726 real 0m7.621s 00:08:10.726 user 0m5.556s 00:08:10.726 sys 0m3.810s 00:08:10.726 09:45:35 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:10.726 09:45:35 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:10.726 ************************************ 00:08:10.726 END TEST spdk_dd_bdev_to_bdev 00:08:10.726 ************************************ 00:08:10.726 09:45:35 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:08:10.726 09:45:35 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:08:10.726 09:45:35 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:10.726 09:45:35 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:10.726 09:45:35 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:10.726 ************************************ 00:08:10.726 START TEST spdk_dd_uring 00:08:10.726 ************************************ 00:08:10.726 09:45:35 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:08:10.726 * Looking for test storage... 
00:08:10.726 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:10.726 09:45:35 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:10.726 09:45:35 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:10.726 09:45:35 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1711 -- # lcov --version 00:08:10.985 09:45:36 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:10.985 09:45:36 spdk_dd.spdk_dd_uring -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:10.985 09:45:36 spdk_dd.spdk_dd_uring -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:10.985 09:45:36 spdk_dd.spdk_dd_uring -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:10.985 09:45:36 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # IFS=.-: 00:08:10.985 09:45:36 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # read -ra ver1 00:08:10.985 09:45:36 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # IFS=.-: 00:08:10.985 09:45:36 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # read -ra ver2 00:08:10.985 09:45:36 spdk_dd.spdk_dd_uring -- scripts/common.sh@338 -- # local 'op=<' 00:08:10.985 09:45:36 spdk_dd.spdk_dd_uring -- scripts/common.sh@340 -- # ver1_l=2 00:08:10.985 09:45:36 spdk_dd.spdk_dd_uring -- scripts/common.sh@341 -- # ver2_l=1 00:08:10.985 09:45:36 spdk_dd.spdk_dd_uring -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:10.985 09:45:36 spdk_dd.spdk_dd_uring -- scripts/common.sh@344 -- # case "$op" in 00:08:10.985 09:45:36 spdk_dd.spdk_dd_uring -- scripts/common.sh@345 -- # : 1 00:08:10.985 09:45:36 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:10.985 09:45:36 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:10.985 09:45:36 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # decimal 1 00:08:10.985 09:45:36 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=1 00:08:10.985 09:45:36 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:10.985 09:45:36 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 1 00:08:10.985 09:45:36 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # ver1[v]=1 00:08:10.985 09:45:36 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # decimal 2 00:08:10.985 09:45:36 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=2 00:08:10.985 09:45:36 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:10.985 09:45:36 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 2 00:08:10.985 09:45:36 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # ver2[v]=2 00:08:10.985 09:45:36 spdk_dd.spdk_dd_uring -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:10.985 09:45:36 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:10.985 09:45:36 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # return 0 00:08:10.985 09:45:36 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:10.985 09:45:36 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:10.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:10.985 --rc genhtml_branch_coverage=1 00:08:10.985 --rc genhtml_function_coverage=1 00:08:10.985 --rc genhtml_legend=1 00:08:10.985 --rc geninfo_all_blocks=1 00:08:10.985 --rc geninfo_unexecuted_blocks=1 00:08:10.985 00:08:10.985 ' 00:08:10.985 09:45:36 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:10.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:10.985 --rc genhtml_branch_coverage=1 00:08:10.985 --rc genhtml_function_coverage=1 00:08:10.985 --rc genhtml_legend=1 00:08:10.985 --rc geninfo_all_blocks=1 00:08:10.985 --rc geninfo_unexecuted_blocks=1 00:08:10.985 00:08:10.985 ' 00:08:10.985 09:45:36 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:10.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:10.985 --rc genhtml_branch_coverage=1 00:08:10.985 --rc genhtml_function_coverage=1 00:08:10.985 --rc genhtml_legend=1 00:08:10.985 --rc geninfo_all_blocks=1 00:08:10.985 --rc geninfo_unexecuted_blocks=1 00:08:10.985 00:08:10.985 ' 00:08:10.985 09:45:36 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:10.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:10.985 --rc genhtml_branch_coverage=1 00:08:10.985 --rc genhtml_function_coverage=1 00:08:10.985 --rc genhtml_legend=1 00:08:10.985 --rc geninfo_all_blocks=1 00:08:10.986 --rc geninfo_unexecuted_blocks=1 00:08:10.986 00:08:10.986 ' 00:08:10.986 09:45:36 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:10.986 09:45:36 spdk_dd.spdk_dd_uring -- scripts/common.sh@15 -- # shopt -s extglob 00:08:10.986 09:45:36 spdk_dd.spdk_dd_uring -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:10.986 09:45:36 spdk_dd.spdk_dd_uring -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:10.986 09:45:36 spdk_dd.spdk_dd_uring -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:10.986 09:45:36 spdk_dd.spdk_dd_uring -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.986 09:45:36 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.986 09:45:36 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.986 09:45:36 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:08:10.986 09:45:36 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.986 09:45:36 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:08:10.986 09:45:36 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:10.986 09:45:36 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:10.986 09:45:36 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:08:10.986 ************************************ 00:08:10.986 START TEST dd_uring_copy 00:08:10.986 ************************************ 00:08:10.986 09:45:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1129 -- # uring_zram_copy 00:08:10.986 09:45:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:08:10.986 09:45:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:08:10.986 09:45:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:08:10.986 09:45:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:08:10.986 
09:45:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:08:10.986 09:45:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:08:10.986 09:45:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@159 -- # [[ -e /sys/class/zram-control ]] 00:08:10.986 09:45:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@160 -- # return 00:08:10.986 09:45:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:08:10.986 09:45:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # cat /sys/class/zram-control/hot_add 00:08:10.986 09:45:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:08:10.986 09:45:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:08:10.986 09:45:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # local id=1 00:08:10.986 09:45:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@178 -- # local size=512M 00:08:10.986 09:45:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@180 -- # [[ -e /sys/block/zram1 ]] 00:08:10.986 09:45:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # echo 512M 00:08:10.986 09:45:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:08:10.986 09:45:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:08:10.986 09:45:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:08:10.986 09:45:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:08:10.986 09:45:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:08:10.986 09:45:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:08:10.986 09:45:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:08:10.986 09:45:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:08:10.986 09:45:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:10.986 09:45:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # magic=iu0atpuxfdjnlt60ystivwqfk98xx7ov8ue1gjruo8f7nxbmerjp5l5t1wvcvpcte0gdbsnkxxuw5u2bpdxri1zf12aytyx3otuoohbbktwdkgvpbybcwt9evr8yzuq7viwitfwfcil10ufhly7lmcm6qo1a0ceokizvdyugxhtzaojgf2zhbkwt3ed85u6uwogym9o2slrsdzd7a1yimhd5ho7hc07xabc3sbleeyk31fl6zn05slsbcp4n4xam39hecf4a2q325zmrc3zgmhjjlutezzf2mxmkbde4kpc9h1riqwaguywlm034ienwlax4jy9e75nwhk7suldanlkqtqssud8et2wocut6bt7bbsjb4p391bj8h8zq594pw7ie7339fk9cb7177wan9zwai3kefr3wvyeos7bgg4kq3o3ldrhcbwflxj3d0bjwhzs8t2w7z6krn8sfy3rk01nvzmtbgl6oar1w0prmma2v8cv6snaaqm867qsuit16jyyt5z4hxx1uswl4w3q2qdncyygqknvbqyuzt6kfku6qk7ky8qsywh59m1lifmyig2s9g7ywaaexh1050nxhlo4ziyj4r08uc0naatvzldi00nn82uj4h1n8ludafmyv6uaaicxymfolermsgk6kw0fi0adylzwom2lx8mcdyy92gahray47mflg52pbtlcpat4gifyae4dnqotfui5exadxc10jqwxinn7f9m9xb9vrkypd5r61w9wxn5kmm1dszkd1wjnw35crm9srofk4vi74g9t98knpuha4kf1avv36y2hfrsqlfroodhb9139ixvx1p4l5j9bi926q73ygl7xpf8yxdjbv20lf2lqfrwfiurq0mjk5vvi5i0t2p7w4qlcu9mc0snzv2ask61n3g79h9vba3gdz77e78596fg8pp9gf5686jdmvisn91w77wqbivrk5jz9b6x4bui5vxi59ct5wnurizkucpuda6virn28tuhwh8zbd4ot5qigr 00:08:10.986 09:45:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo 
iu0atpuxfdjnlt60ystivwqfk98xx7ov8ue1gjruo8f7nxbmerjp5l5t1wvcvpcte0gdbsnkxxuw5u2bpdxri1zf12aytyx3otuoohbbktwdkgvpbybcwt9evr8yzuq7viwitfwfcil10ufhly7lmcm6qo1a0ceokizvdyugxhtzaojgf2zhbkwt3ed85u6uwogym9o2slrsdzd7a1yimhd5ho7hc07xabc3sbleeyk31fl6zn05slsbcp4n4xam39hecf4a2q325zmrc3zgmhjjlutezzf2mxmkbde4kpc9h1riqwaguywlm034ienwlax4jy9e75nwhk7suldanlkqtqssud8et2wocut6bt7bbsjb4p391bj8h8zq594pw7ie7339fk9cb7177wan9zwai3kefr3wvyeos7bgg4kq3o3ldrhcbwflxj3d0bjwhzs8t2w7z6krn8sfy3rk01nvzmtbgl6oar1w0prmma2v8cv6snaaqm867qsuit16jyyt5z4hxx1uswl4w3q2qdncyygqknvbqyuzt6kfku6qk7ky8qsywh59m1lifmyig2s9g7ywaaexh1050nxhlo4ziyj4r08uc0naatvzldi00nn82uj4h1n8ludafmyv6uaaicxymfolermsgk6kw0fi0adylzwom2lx8mcdyy92gahray47mflg52pbtlcpat4gifyae4dnqotfui5exadxc10jqwxinn7f9m9xb9vrkypd5r61w9wxn5kmm1dszkd1wjnw35crm9srofk4vi74g9t98knpuha4kf1avv36y2hfrsqlfroodhb9139ixvx1p4l5j9bi926q73ygl7xpf8yxdjbv20lf2lqfrwfiurq0mjk5vvi5i0t2p7w4qlcu9mc0snzv2ask61n3g79h9vba3gdz77e78596fg8pp9gf5686jdmvisn91w77wqbivrk5jz9b6x4bui5vxi59ct5wnurizkucpuda6virn28tuhwh8zbd4ot5qigr 00:08:10.986 09:45:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:08:10.986 [2024-12-06 09:45:36.136324] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:08:10.986 [2024-12-06 09:45:36.136387] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61032 ] 00:08:11.246 [2024-12-06 09:45:36.282643] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.246 [2024-12-06 09:45:36.334985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.246 [2024-12-06 09:45:36.397078] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:11.814  [2024-12-06T09:45:37.655Z] Copying: 511/511 [MB] (average 1224 MBps) 00:08:12.383 00:08:12.383 09:45:37 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:08:12.383 09:45:37 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:08:12.383 09:45:37 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:12.383 09:45:37 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:12.383 [2024-12-06 09:45:37.545741] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:08:12.383 [2024-12-06 09:45:37.545841] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61054 ] 00:08:12.383 { 00:08:12.383 "subsystems": [ 00:08:12.383 { 00:08:12.383 "subsystem": "bdev", 00:08:12.383 "config": [ 00:08:12.383 { 00:08:12.383 "params": { 00:08:12.383 "block_size": 512, 00:08:12.383 "num_blocks": 1048576, 00:08:12.383 "name": "malloc0" 00:08:12.383 }, 00:08:12.383 "method": "bdev_malloc_create" 00:08:12.383 }, 00:08:12.383 { 00:08:12.383 "params": { 00:08:12.383 "filename": "/dev/zram1", 00:08:12.383 "name": "uring0" 00:08:12.383 }, 00:08:12.383 "method": "bdev_uring_create" 00:08:12.383 }, 00:08:12.383 { 00:08:12.383 "method": "bdev_wait_for_examine" 00:08:12.383 } 00:08:12.383 ] 00:08:12.383 } 00:08:12.383 ] 00:08:12.383 } 00:08:12.643 [2024-12-06 09:45:37.694035] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.643 [2024-12-06 09:45:37.748094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.643 [2024-12-06 09:45:37.808415] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:14.020  [2024-12-06T09:45:40.231Z] Copying: 181/512 [MB] (181 MBps) [2024-12-06T09:45:41.169Z] Copying: 360/512 [MB] (178 MBps) [2024-12-06T09:45:41.428Z] Copying: 512/512 [MB] (average 182 MBps) 00:08:16.156 00:08:16.156 09:45:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:08:16.156 09:45:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:08:16.156 09:45:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:16.156 09:45:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:16.156 [2024-12-06 09:45:41.283439] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:08:16.156 [2024-12-06 09:45:41.284042] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61104 ] 00:08:16.156 { 00:08:16.156 "subsystems": [ 00:08:16.156 { 00:08:16.156 "subsystem": "bdev", 00:08:16.156 "config": [ 00:08:16.156 { 00:08:16.156 "params": { 00:08:16.156 "block_size": 512, 00:08:16.156 "num_blocks": 1048576, 00:08:16.156 "name": "malloc0" 00:08:16.156 }, 00:08:16.156 "method": "bdev_malloc_create" 00:08:16.156 }, 00:08:16.156 { 00:08:16.156 "params": { 00:08:16.156 "filename": "/dev/zram1", 00:08:16.156 "name": "uring0" 00:08:16.156 }, 00:08:16.156 "method": "bdev_uring_create" 00:08:16.156 }, 00:08:16.156 { 00:08:16.156 "method": "bdev_wait_for_examine" 00:08:16.156 } 00:08:16.156 ] 00:08:16.156 } 00:08:16.156 ] 00:08:16.156 } 00:08:16.416 [2024-12-06 09:45:41.428921] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.416 [2024-12-06 09:45:41.482435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.416 [2024-12-06 09:45:41.537954] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:17.793  [2024-12-06T09:45:44.002Z] Copying: 171/512 [MB] (171 MBps) [2024-12-06T09:45:44.938Z] Copying: 334/512 [MB] (162 MBps) [2024-12-06T09:45:45.198Z] Copying: 479/512 [MB] (144 MBps) [2024-12-06T09:45:45.456Z] Copying: 512/512 [MB] (average 157 MBps) 00:08:20.184 00:08:20.184 09:45:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:08:20.184 09:45:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ iu0atpuxfdjnlt60ystivwqfk98xx7ov8ue1gjruo8f7nxbmerjp5l5t1wvcvpcte0gdbsnkxxuw5u2bpdxri1zf12aytyx3otuoohbbktwdkgvpbybcwt9evr8yzuq7viwitfwfcil10ufhly7lmcm6qo1a0ceokizvdyugxhtzaojgf2zhbkwt3ed85u6uwogym9o2slrsdzd7a1yimhd5ho7hc07xabc3sbleeyk31fl6zn05slsbcp4n4xam39hecf4a2q325zmrc3zgmhjjlutezzf2mxmkbde4kpc9h1riqwaguywlm034ienwlax4jy9e75nwhk7suldanlkqtqssud8et2wocut6bt7bbsjb4p391bj8h8zq594pw7ie7339fk9cb7177wan9zwai3kefr3wvyeos7bgg4kq3o3ldrhcbwflxj3d0bjwhzs8t2w7z6krn8sfy3rk01nvzmtbgl6oar1w0prmma2v8cv6snaaqm867qsuit16jyyt5z4hxx1uswl4w3q2qdncyygqknvbqyuzt6kfku6qk7ky8qsywh59m1lifmyig2s9g7ywaaexh1050nxhlo4ziyj4r08uc0naatvzldi00nn82uj4h1n8ludafmyv6uaaicxymfolermsgk6kw0fi0adylzwom2lx8mcdyy92gahray47mflg52pbtlcpat4gifyae4dnqotfui5exadxc10jqwxinn7f9m9xb9vrkypd5r61w9wxn5kmm1dszkd1wjnw35crm9srofk4vi74g9t98knpuha4kf1avv36y2hfrsqlfroodhb9139ixvx1p4l5j9bi926q73ygl7xpf8yxdjbv20lf2lqfrwfiurq0mjk5vvi5i0t2p7w4qlcu9mc0snzv2ask61n3g79h9vba3gdz77e78596fg8pp9gf5686jdmvisn91w77wqbivrk5jz9b6x4bui5vxi59ct5wnurizkucpuda6virn28tuhwh8zbd4ot5qigr == 
\i\u\0\a\t\p\u\x\f\d\j\n\l\t\6\0\y\s\t\i\v\w\q\f\k\9\8\x\x\7\o\v\8\u\e\1\g\j\r\u\o\8\f\7\n\x\b\m\e\r\j\p\5\l\5\t\1\w\v\c\v\p\c\t\e\0\g\d\b\s\n\k\x\x\u\w\5\u\2\b\p\d\x\r\i\1\z\f\1\2\a\y\t\y\x\3\o\t\u\o\o\h\b\b\k\t\w\d\k\g\v\p\b\y\b\c\w\t\9\e\v\r\8\y\z\u\q\7\v\i\w\i\t\f\w\f\c\i\l\1\0\u\f\h\l\y\7\l\m\c\m\6\q\o\1\a\0\c\e\o\k\i\z\v\d\y\u\g\x\h\t\z\a\o\j\g\f\2\z\h\b\k\w\t\3\e\d\8\5\u\6\u\w\o\g\y\m\9\o\2\s\l\r\s\d\z\d\7\a\1\y\i\m\h\d\5\h\o\7\h\c\0\7\x\a\b\c\3\s\b\l\e\e\y\k\3\1\f\l\6\z\n\0\5\s\l\s\b\c\p\4\n\4\x\a\m\3\9\h\e\c\f\4\a\2\q\3\2\5\z\m\r\c\3\z\g\m\h\j\j\l\u\t\e\z\z\f\2\m\x\m\k\b\d\e\4\k\p\c\9\h\1\r\i\q\w\a\g\u\y\w\l\m\0\3\4\i\e\n\w\l\a\x\4\j\y\9\e\7\5\n\w\h\k\7\s\u\l\d\a\n\l\k\q\t\q\s\s\u\d\8\e\t\2\w\o\c\u\t\6\b\t\7\b\b\s\j\b\4\p\3\9\1\b\j\8\h\8\z\q\5\9\4\p\w\7\i\e\7\3\3\9\f\k\9\c\b\7\1\7\7\w\a\n\9\z\w\a\i\3\k\e\f\r\3\w\v\y\e\o\s\7\b\g\g\4\k\q\3\o\3\l\d\r\h\c\b\w\f\l\x\j\3\d\0\b\j\w\h\z\s\8\t\2\w\7\z\6\k\r\n\8\s\f\y\3\r\k\0\1\n\v\z\m\t\b\g\l\6\o\a\r\1\w\0\p\r\m\m\a\2\v\8\c\v\6\s\n\a\a\q\m\8\6\7\q\s\u\i\t\1\6\j\y\y\t\5\z\4\h\x\x\1\u\s\w\l\4\w\3\q\2\q\d\n\c\y\y\g\q\k\n\v\b\q\y\u\z\t\6\k\f\k\u\6\q\k\7\k\y\8\q\s\y\w\h\5\9\m\1\l\i\f\m\y\i\g\2\s\9\g\7\y\w\a\a\e\x\h\1\0\5\0\n\x\h\l\o\4\z\i\y\j\4\r\0\8\u\c\0\n\a\a\t\v\z\l\d\i\0\0\n\n\8\2\u\j\4\h\1\n\8\l\u\d\a\f\m\y\v\6\u\a\a\i\c\x\y\m\f\o\l\e\r\m\s\g\k\6\k\w\0\f\i\0\a\d\y\l\z\w\o\m\2\l\x\8\m\c\d\y\y\9\2\g\a\h\r\a\y\4\7\m\f\l\g\5\2\p\b\t\l\c\p\a\t\4\g\i\f\y\a\e\4\d\n\q\o\t\f\u\i\5\e\x\a\d\x\c\1\0\j\q\w\x\i\n\n\7\f\9\m\9\x\b\9\v\r\k\y\p\d\5\r\6\1\w\9\w\x\n\5\k\m\m\1\d\s\z\k\d\1\w\j\n\w\3\5\c\r\m\9\s\r\o\f\k\4\v\i\7\4\g\9\t\9\8\k\n\p\u\h\a\4\k\f\1\a\v\v\3\6\y\2\h\f\r\s\q\l\f\r\o\o\d\h\b\9\1\3\9\i\x\v\x\1\p\4\l\5\j\9\b\i\9\2\6\q\7\3\y\g\l\7\x\p\f\8\y\x\d\j\b\v\2\0\l\f\2\l\q\f\r\w\f\i\u\r\q\0\m\j\k\5\v\v\i\5\i\0\t\2\p\7\w\4\q\l\c\u\9\m\c\0\s\n\z\v\2\a\s\k\6\1\n\3\g\7\9\h\9\v\b\a\3\g\d\z\7\7\e\7\8\5\9\6\f\g\8\p\p\9\g\f\5\6\8\6\j\d\m\v\i\s\n\9\1\w\7\7\w\q\b\i\v\r\k\5\j\z\9\b\6\x\4\b\u\i\5\v\x\i\5\9\c\t\5\w\n\u\r\i\z\k\u\c\p\u\d\a\6\v\i\r\n\2\8\t\u\h\w\h\8\z\b\d\4\o\t\5\q\i\g\r ]] 00:08:20.184 09:45:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:08:20.184 09:45:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ iu0atpuxfdjnlt60ystivwqfk98xx7ov8ue1gjruo8f7nxbmerjp5l5t1wvcvpcte0gdbsnkxxuw5u2bpdxri1zf12aytyx3otuoohbbktwdkgvpbybcwt9evr8yzuq7viwitfwfcil10ufhly7lmcm6qo1a0ceokizvdyugxhtzaojgf2zhbkwt3ed85u6uwogym9o2slrsdzd7a1yimhd5ho7hc07xabc3sbleeyk31fl6zn05slsbcp4n4xam39hecf4a2q325zmrc3zgmhjjlutezzf2mxmkbde4kpc9h1riqwaguywlm034ienwlax4jy9e75nwhk7suldanlkqtqssud8et2wocut6bt7bbsjb4p391bj8h8zq594pw7ie7339fk9cb7177wan9zwai3kefr3wvyeos7bgg4kq3o3ldrhcbwflxj3d0bjwhzs8t2w7z6krn8sfy3rk01nvzmtbgl6oar1w0prmma2v8cv6snaaqm867qsuit16jyyt5z4hxx1uswl4w3q2qdncyygqknvbqyuzt6kfku6qk7ky8qsywh59m1lifmyig2s9g7ywaaexh1050nxhlo4ziyj4r08uc0naatvzldi00nn82uj4h1n8ludafmyv6uaaicxymfolermsgk6kw0fi0adylzwom2lx8mcdyy92gahray47mflg52pbtlcpat4gifyae4dnqotfui5exadxc10jqwxinn7f9m9xb9vrkypd5r61w9wxn5kmm1dszkd1wjnw35crm9srofk4vi74g9t98knpuha4kf1avv36y2hfrsqlfroodhb9139ixvx1p4l5j9bi926q73ygl7xpf8yxdjbv20lf2lqfrwfiurq0mjk5vvi5i0t2p7w4qlcu9mc0snzv2ask61n3g79h9vba3gdz77e78596fg8pp9gf5686jdmvisn91w77wqbivrk5jz9b6x4bui5vxi59ct5wnurizkucpuda6virn28tuhwh8zbd4ot5qigr == 
\i\u\0\a\t\p\u\x\f\d\j\n\l\t\6\0\y\s\t\i\v\w\q\f\k\9\8\x\x\7\o\v\8\u\e\1\g\j\r\u\o\8\f\7\n\x\b\m\e\r\j\p\5\l\5\t\1\w\v\c\v\p\c\t\e\0\g\d\b\s\n\k\x\x\u\w\5\u\2\b\p\d\x\r\i\1\z\f\1\2\a\y\t\y\x\3\o\t\u\o\o\h\b\b\k\t\w\d\k\g\v\p\b\y\b\c\w\t\9\e\v\r\8\y\z\u\q\7\v\i\w\i\t\f\w\f\c\i\l\1\0\u\f\h\l\y\7\l\m\c\m\6\q\o\1\a\0\c\e\o\k\i\z\v\d\y\u\g\x\h\t\z\a\o\j\g\f\2\z\h\b\k\w\t\3\e\d\8\5\u\6\u\w\o\g\y\m\9\o\2\s\l\r\s\d\z\d\7\a\1\y\i\m\h\d\5\h\o\7\h\c\0\7\x\a\b\c\3\s\b\l\e\e\y\k\3\1\f\l\6\z\n\0\5\s\l\s\b\c\p\4\n\4\x\a\m\3\9\h\e\c\f\4\a\2\q\3\2\5\z\m\r\c\3\z\g\m\h\j\j\l\u\t\e\z\z\f\2\m\x\m\k\b\d\e\4\k\p\c\9\h\1\r\i\q\w\a\g\u\y\w\l\m\0\3\4\i\e\n\w\l\a\x\4\j\y\9\e\7\5\n\w\h\k\7\s\u\l\d\a\n\l\k\q\t\q\s\s\u\d\8\e\t\2\w\o\c\u\t\6\b\t\7\b\b\s\j\b\4\p\3\9\1\b\j\8\h\8\z\q\5\9\4\p\w\7\i\e\7\3\3\9\f\k\9\c\b\7\1\7\7\w\a\n\9\z\w\a\i\3\k\e\f\r\3\w\v\y\e\o\s\7\b\g\g\4\k\q\3\o\3\l\d\r\h\c\b\w\f\l\x\j\3\d\0\b\j\w\h\z\s\8\t\2\w\7\z\6\k\r\n\8\s\f\y\3\r\k\0\1\n\v\z\m\t\b\g\l\6\o\a\r\1\w\0\p\r\m\m\a\2\v\8\c\v\6\s\n\a\a\q\m\8\6\7\q\s\u\i\t\1\6\j\y\y\t\5\z\4\h\x\x\1\u\s\w\l\4\w\3\q\2\q\d\n\c\y\y\g\q\k\n\v\b\q\y\u\z\t\6\k\f\k\u\6\q\k\7\k\y\8\q\s\y\w\h\5\9\m\1\l\i\f\m\y\i\g\2\s\9\g\7\y\w\a\a\e\x\h\1\0\5\0\n\x\h\l\o\4\z\i\y\j\4\r\0\8\u\c\0\n\a\a\t\v\z\l\d\i\0\0\n\n\8\2\u\j\4\h\1\n\8\l\u\d\a\f\m\y\v\6\u\a\a\i\c\x\y\m\f\o\l\e\r\m\s\g\k\6\k\w\0\f\i\0\a\d\y\l\z\w\o\m\2\l\x\8\m\c\d\y\y\9\2\g\a\h\r\a\y\4\7\m\f\l\g\5\2\p\b\t\l\c\p\a\t\4\g\i\f\y\a\e\4\d\n\q\o\t\f\u\i\5\e\x\a\d\x\c\1\0\j\q\w\x\i\n\n\7\f\9\m\9\x\b\9\v\r\k\y\p\d\5\r\6\1\w\9\w\x\n\5\k\m\m\1\d\s\z\k\d\1\w\j\n\w\3\5\c\r\m\9\s\r\o\f\k\4\v\i\7\4\g\9\t\9\8\k\n\p\u\h\a\4\k\f\1\a\v\v\3\6\y\2\h\f\r\s\q\l\f\r\o\o\d\h\b\9\1\3\9\i\x\v\x\1\p\4\l\5\j\9\b\i\9\2\6\q\7\3\y\g\l\7\x\p\f\8\y\x\d\j\b\v\2\0\l\f\2\l\q\f\r\w\f\i\u\r\q\0\m\j\k\5\v\v\i\5\i\0\t\2\p\7\w\4\q\l\c\u\9\m\c\0\s\n\z\v\2\a\s\k\6\1\n\3\g\7\9\h\9\v\b\a\3\g\d\z\7\7\e\7\8\5\9\6\f\g\8\p\p\9\g\f\5\6\8\6\j\d\m\v\i\s\n\9\1\w\7\7\w\q\b\i\v\r\k\5\j\z\9\b\6\x\4\b\u\i\5\v\x\i\5\9\c\t\5\w\n\u\r\i\z\k\u\c\p\u\d\a\6\v\i\r\n\2\8\t\u\h\w\h\8\z\b\d\4\o\t\5\q\i\g\r ]] 00:08:20.185 09:45:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:08:20.751 09:45:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:08:20.751 09:45:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:08:20.751 09:45:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:20.751 09:45:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:20.751 [2024-12-06 09:45:45.822890] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:08:20.751 [2024-12-06 09:45:45.823039] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61191 ] 00:08:20.751 { 00:08:20.751 "subsystems": [ 00:08:20.751 { 00:08:20.751 "subsystem": "bdev", 00:08:20.751 "config": [ 00:08:20.751 { 00:08:20.751 "params": { 00:08:20.751 "block_size": 512, 00:08:20.751 "num_blocks": 1048576, 00:08:20.751 "name": "malloc0" 00:08:20.751 }, 00:08:20.751 "method": "bdev_malloc_create" 00:08:20.751 }, 00:08:20.751 { 00:08:20.751 "params": { 00:08:20.751 "filename": "/dev/zram1", 00:08:20.751 "name": "uring0" 00:08:20.751 }, 00:08:20.751 "method": "bdev_uring_create" 00:08:20.751 }, 00:08:20.751 { 00:08:20.751 "method": "bdev_wait_for_examine" 00:08:20.751 } 00:08:20.751 ] 00:08:20.751 } 00:08:20.751 ] 00:08:20.751 } 00:08:20.751 [2024-12-06 09:45:45.967781] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.751 [2024-12-06 09:45:46.010935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.009 [2024-12-06 09:45:46.064070] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:22.424  [2024-12-06T09:45:48.268Z] Copying: 169/512 [MB] (169 MBps) [2024-12-06T09:45:49.646Z] Copying: 333/512 [MB] (164 MBps) [2024-12-06T09:45:49.646Z] Copying: 490/512 [MB] (157 MBps) [2024-12-06T09:45:49.905Z] Copying: 512/512 [MB] (average 164 MBps) 00:08:24.633 00:08:24.633 09:45:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:08:24.633 09:45:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:08:24.633 09:45:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:08:24.633 09:45:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:08:24.633 09:45:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:08:24.633 09:45:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:08:24.633 09:45:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:24.633 09:45:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:24.633 [2024-12-06 09:45:49.814351] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:08:24.633 [2024-12-06 09:45:49.814479] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61247 ] 00:08:24.633 { 00:08:24.633 "subsystems": [ 00:08:24.633 { 00:08:24.633 "subsystem": "bdev", 00:08:24.633 "config": [ 00:08:24.633 { 00:08:24.633 "params": { 00:08:24.633 "block_size": 512, 00:08:24.633 "num_blocks": 1048576, 00:08:24.633 "name": "malloc0" 00:08:24.633 }, 00:08:24.633 "method": "bdev_malloc_create" 00:08:24.633 }, 00:08:24.633 { 00:08:24.633 "params": { 00:08:24.633 "filename": "/dev/zram1", 00:08:24.633 "name": "uring0" 00:08:24.633 }, 00:08:24.633 "method": "bdev_uring_create" 00:08:24.633 }, 00:08:24.633 { 00:08:24.633 "params": { 00:08:24.633 "name": "uring0" 00:08:24.633 }, 00:08:24.633 "method": "bdev_uring_delete" 00:08:24.633 }, 00:08:24.633 { 00:08:24.633 "method": "bdev_wait_for_examine" 00:08:24.633 } 00:08:24.633 ] 00:08:24.633 } 00:08:24.633 ] 00:08:24.633 } 00:08:24.891 [2024-12-06 09:45:49.960067] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.891 [2024-12-06 09:45:50.019338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.891 [2024-12-06 09:45:50.071824] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:25.150  [2024-12-06T09:45:50.680Z] Copying: 0/0 [B] (average 0 Bps) 00:08:25.408 00:08:25.408 09:45:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:08:25.408 09:45:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:08:25.408 09:45:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:08:25.408 09:45:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@652 -- # local es=0 00:08:25.408 09:45:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:25.408 09:45:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:08:25.408 09:45:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:25.408 09:45:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:25.408 09:45:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:25.409 09:45:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:25.409 09:45:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:25.409 09:45:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:25.409 09:45:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:25.409 09:45:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:25.409 09:45:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:25.409 09:45:50 
spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:08:25.667 [2024-12-06 09:45:50.722879] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:08:25.667 [2024-12-06 09:45:50.722968] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61276 ] 00:08:25.667 { 00:08:25.667 "subsystems": [ 00:08:25.667 { 00:08:25.667 "subsystem": "bdev", 00:08:25.667 "config": [ 00:08:25.667 { 00:08:25.667 "params": { 00:08:25.667 "block_size": 512, 00:08:25.667 "num_blocks": 1048576, 00:08:25.667 "name": "malloc0" 00:08:25.667 }, 00:08:25.667 "method": "bdev_malloc_create" 00:08:25.667 }, 00:08:25.667 { 00:08:25.667 "params": { 00:08:25.667 "filename": "/dev/zram1", 00:08:25.667 "name": "uring0" 00:08:25.667 }, 00:08:25.668 "method": "bdev_uring_create" 00:08:25.668 }, 00:08:25.668 { 00:08:25.668 "params": { 00:08:25.668 "name": "uring0" 00:08:25.668 }, 00:08:25.668 "method": "bdev_uring_delete" 00:08:25.668 }, 00:08:25.668 { 00:08:25.668 "method": "bdev_wait_for_examine" 00:08:25.668 } 00:08:25.668 ] 00:08:25.668 } 00:08:25.668 ] 00:08:25.668 } 00:08:25.668 [2024-12-06 09:45:50.863549] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.668 [2024-12-06 09:45:50.921264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.926 [2024-12-06 09:45:50.981893] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:26.184 [2024-12-06 09:45:51.203214] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:08:26.184 [2024-12-06 09:45:51.203269] spdk_dd.c: 931:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:08:26.184 [2024-12-06 09:45:51.203279] spdk_dd.c:1088:dd_run: *ERROR*: uring0: No such device 00:08:26.184 [2024-12-06 09:45:51.203288] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:26.442 [2024-12-06 09:45:51.507819] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:08:26.442 09:45:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@655 -- # es=237 00:08:26.442 09:45:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:26.442 09:45:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@664 -- # es=109 00:08:26.442 09:45:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@665 -- # case "$es" in 00:08:26.442 09:45:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@672 -- # es=1 00:08:26.442 09:45:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:26.442 09:45:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:08:26.442 09:45:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # local id=1 00:08:26.442 09:45:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@170 -- # [[ -e /sys/block/zram1 ]] 00:08:26.442 09:45:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # echo 1 00:08:26.443 09:45:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@173 -- # echo 1 00:08:26.443 09:45:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:08:26.701 ************************************ 00:08:26.701 END TEST dd_uring_copy 00:08:26.701 ************************************ 00:08:26.701 00:08:26.701 real 0m15.711s 00:08:26.701 user 0m10.702s 00:08:26.701 sys 0m13.771s 00:08:26.701 09:45:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:26.701 09:45:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:26.701 00:08:26.701 real 0m15.961s 00:08:26.701 user 0m10.836s 00:08:26.701 sys 0m13.893s 00:08:26.701 09:45:51 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:26.701 09:45:51 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:08:26.701 ************************************ 00:08:26.701 END TEST spdk_dd_uring 00:08:26.701 ************************************ 00:08:26.701 09:45:51 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:08:26.701 09:45:51 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:26.701 09:45:51 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:26.701 09:45:51 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:26.701 ************************************ 00:08:26.701 START TEST spdk_dd_sparse 00:08:26.701 ************************************ 00:08:26.701 09:45:51 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:08:26.701 * Looking for test storage... 00:08:26.701 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:26.701 09:45:51 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:26.701 09:45:51 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1711 -- # lcov --version 00:08:26.701 09:45:51 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:26.960 09:45:52 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:26.960 09:45:52 spdk_dd.spdk_dd_sparse -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:26.960 09:45:52 spdk_dd.spdk_dd_sparse -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:26.960 09:45:52 spdk_dd.spdk_dd_sparse -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:26.960 09:45:52 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # IFS=.-: 00:08:26.960 09:45:52 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # read -ra ver1 00:08:26.960 09:45:52 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # IFS=.-: 00:08:26.960 09:45:52 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # read -ra ver2 00:08:26.960 09:45:52 spdk_dd.spdk_dd_sparse -- scripts/common.sh@338 -- # local 'op=<' 00:08:26.960 09:45:52 spdk_dd.spdk_dd_sparse -- scripts/common.sh@340 -- # ver1_l=2 00:08:26.960 09:45:52 spdk_dd.spdk_dd_sparse -- scripts/common.sh@341 -- # ver2_l=1 00:08:26.960 09:45:52 spdk_dd.spdk_dd_sparse -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:26.960 09:45:52 spdk_dd.spdk_dd_sparse -- scripts/common.sh@344 -- # case "$op" in 00:08:26.960 09:45:52 spdk_dd.spdk_dd_sparse -- scripts/common.sh@345 -- # : 1 00:08:26.960 09:45:52 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:26.961 09:45:52 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:26.961 09:45:52 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # decimal 1 00:08:26.961 09:45:52 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=1 00:08:26.961 09:45:52 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:26.961 09:45:52 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 1 00:08:26.961 09:45:52 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # ver1[v]=1 00:08:26.961 09:45:52 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # decimal 2 00:08:26.961 09:45:52 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=2 00:08:26.961 09:45:52 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:26.961 09:45:52 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 2 00:08:26.961 09:45:52 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # ver2[v]=2 00:08:26.961 09:45:52 spdk_dd.spdk_dd_sparse -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:26.961 09:45:52 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:26.961 09:45:52 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # return 0 00:08:26.961 09:45:52 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:26.961 09:45:52 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:26.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.961 --rc genhtml_branch_coverage=1 00:08:26.961 --rc genhtml_function_coverage=1 00:08:26.961 --rc genhtml_legend=1 00:08:26.961 --rc geninfo_all_blocks=1 00:08:26.961 --rc geninfo_unexecuted_blocks=1 00:08:26.961 00:08:26.961 ' 00:08:26.961 09:45:52 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:26.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.961 --rc genhtml_branch_coverage=1 00:08:26.961 --rc genhtml_function_coverage=1 00:08:26.961 --rc genhtml_legend=1 00:08:26.961 --rc geninfo_all_blocks=1 00:08:26.961 --rc geninfo_unexecuted_blocks=1 00:08:26.961 00:08:26.961 ' 00:08:26.961 09:45:52 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:26.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.961 --rc genhtml_branch_coverage=1 00:08:26.961 --rc genhtml_function_coverage=1 00:08:26.961 --rc genhtml_legend=1 00:08:26.961 --rc geninfo_all_blocks=1 00:08:26.961 --rc geninfo_unexecuted_blocks=1 00:08:26.961 00:08:26.961 ' 00:08:26.961 09:45:52 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:26.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.961 --rc genhtml_branch_coverage=1 00:08:26.961 --rc genhtml_function_coverage=1 00:08:26.961 --rc genhtml_legend=1 00:08:26.961 --rc geninfo_all_blocks=1 00:08:26.961 --rc geninfo_unexecuted_blocks=1 00:08:26.961 00:08:26.961 ' 00:08:26.961 09:45:52 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:26.961 09:45:52 spdk_dd.spdk_dd_sparse -- scripts/common.sh@15 -- # shopt -s extglob 00:08:26.961 09:45:52 spdk_dd.spdk_dd_sparse -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:26.961 09:45:52 spdk_dd.spdk_dd_sparse -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:26.961 09:45:52 spdk_dd.spdk_dd_sparse -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:26.961 09:45:52 
spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.961 09:45:52 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.961 09:45:52 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.961 09:45:52 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:08:26.961 09:45:52 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.961 09:45:52 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:08:26.961 09:45:52 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:08:26.961 09:45:52 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:08:26.961 09:45:52 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:08:26.961 09:45:52 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:08:26.961 09:45:52 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:08:26.961 09:45:52 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:08:26.961 09:45:52 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:08:26.961 09:45:52 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:08:26.961 09:45:52 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:08:26.961 09:45:52 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:08:26.961 1+0 records in 00:08:26.961 1+0 records out 00:08:26.961 4194304 bytes (4.2 MB, 
4.0 MiB) copied, 0.00599758 s, 699 MB/s 00:08:26.961 09:45:52 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:08:26.961 1+0 records in 00:08:26.961 1+0 records out 00:08:26.961 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00447053 s, 938 MB/s 00:08:26.961 09:45:52 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:08:26.961 1+0 records in 00:08:26.961 1+0 records out 00:08:26.961 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00643291 s, 652 MB/s 00:08:26.961 09:45:52 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:08:26.961 09:45:52 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:26.961 09:45:52 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:26.961 09:45:52 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:08:26.961 ************************************ 00:08:26.961 START TEST dd_sparse_file_to_file 00:08:26.961 ************************************ 00:08:26.961 09:45:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1129 -- # file_to_file 00:08:26.961 09:45:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:08:26.961 09:45:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:08:26.961 09:45:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:08:26.961 09:45:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:08:26.961 09:45:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:08:26.961 09:45:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:08:26.961 09:45:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:08:26.961 09:45:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:08:26.961 09:45:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:08:26.961 09:45:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:08:26.961 [2024-12-06 09:45:52.159571] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
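The prepare step above lays out the sparse input before any copy runs: a 100 MiB backing file for the AIO bdev (dd_sparse_aio_disk) and file_zero1 with three 4 MiB data extents at offsets 0, 16 MiB and 32 MiB, holes in between, so its apparent size is 36 MiB (37748736 bytes) while only 12 MiB (24576 512-byte blocks) is allocated. A minimal bash sketch of the same layout, assuming GNU truncate/dd/stat and a filesystem that supports sparse files (block counts can vary slightly by filesystem):

  truncate dd_sparse_aio_disk --size 104857600          # 100 MiB file backing the AIO bdev
  dd if=/dev/zero of=file_zero1 bs=4M count=1            # 4 MiB of data at offset 0
  dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4     # 4 MiB at offset 16 MiB, hole before it
  dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8     # 4 MiB at offset 32 MiB, hole before it
  stat --printf='%s bytes apparent, %b blocks allocated\n' file_zero1
  # expected: 37748736 bytes apparent, 24576 512-byte blocks (12 MiB) allocated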
00:08:26.961 [2024-12-06 09:45:52.159684] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61376 ] 00:08:26.961 { 00:08:26.961 "subsystems": [ 00:08:26.961 { 00:08:26.961 "subsystem": "bdev", 00:08:26.961 "config": [ 00:08:26.961 { 00:08:26.961 "params": { 00:08:26.961 "block_size": 4096, 00:08:26.961 "filename": "dd_sparse_aio_disk", 00:08:26.961 "name": "dd_aio" 00:08:26.961 }, 00:08:26.961 "method": "bdev_aio_create" 00:08:26.961 }, 00:08:26.961 { 00:08:26.961 "params": { 00:08:26.962 "lvs_name": "dd_lvstore", 00:08:26.962 "bdev_name": "dd_aio" 00:08:26.962 }, 00:08:26.962 "method": "bdev_lvol_create_lvstore" 00:08:26.962 }, 00:08:26.962 { 00:08:26.962 "method": "bdev_wait_for_examine" 00:08:26.962 } 00:08:26.962 ] 00:08:26.962 } 00:08:26.962 ] 00:08:26.962 } 00:08:27.220 [2024-12-06 09:45:52.298733] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.220 [2024-12-06 09:45:52.351054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.220 [2024-12-06 09:45:52.402227] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:27.477  [2024-12-06T09:45:52.750Z] Copying: 12/36 [MB] (average 666 MBps) 00:08:27.478 00:08:27.478 09:45:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:08:27.478 09:45:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:08:27.478 09:45:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:08:27.478 09:45:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:08:27.478 09:45:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:08:27.478 09:45:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:08:27.478 09:45:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:08:27.478 09:45:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:08:27.478 09:45:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:08:27.478 09:45:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:08:27.478 00:08:27.478 real 0m0.633s 00:08:27.478 user 0m0.387s 00:08:27.478 sys 0m0.359s 00:08:27.478 09:45:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:27.478 ************************************ 00:08:27.478 END TEST dd_sparse_file_to_file 00:08:27.478 ************************************ 00:08:27.478 09:45:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:08:27.736 09:45:52 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:08:27.736 09:45:52 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:27.736 09:45:52 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:27.736 09:45:52 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:08:27.736 ************************************ 00:08:27.736 START TEST dd_sparse_file_to_bdev 
00:08:27.736 ************************************ 00:08:27.736 09:45:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1129 -- # file_to_bdev 00:08:27.736 09:45:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:08:27.736 09:45:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:08:27.736 09:45:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true') 00:08:27.736 09:45:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:08:27.736 09:45:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:08:27.736 09:45:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:08:27.736 09:45:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:08:27.736 09:45:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:27.736 [2024-12-06 09:45:52.861999] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:08:27.736 [2024-12-06 09:45:52.862116] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61419 ] 00:08:27.736 { 00:08:27.736 "subsystems": [ 00:08:27.736 { 00:08:27.736 "subsystem": "bdev", 00:08:27.736 "config": [ 00:08:27.736 { 00:08:27.736 "params": { 00:08:27.736 "block_size": 4096, 00:08:27.736 "filename": "dd_sparse_aio_disk", 00:08:27.736 "name": "dd_aio" 00:08:27.736 }, 00:08:27.736 "method": "bdev_aio_create" 00:08:27.736 }, 00:08:27.736 { 00:08:27.736 "params": { 00:08:27.736 "lvs_name": "dd_lvstore", 00:08:27.736 "lvol_name": "dd_lvol", 00:08:27.736 "size_in_mib": 36, 00:08:27.736 "thin_provision": true 00:08:27.736 }, 00:08:27.736 "method": "bdev_lvol_create" 00:08:27.736 }, 00:08:27.736 { 00:08:27.736 "method": "bdev_wait_for_examine" 00:08:27.736 } 00:08:27.736 ] 00:08:27.736 } 00:08:27.736 ] 00:08:27.736 } 00:08:27.995 [2024-12-06 09:45:53.008614] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.995 [2024-12-06 09:45:53.060005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.995 [2024-12-06 09:45:53.113380] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:27.995  [2024-12-06T09:45:53.524Z] Copying: 12/36 [MB] (average 444 MBps) 00:08:28.252 00:08:28.252 00:08:28.252 real 0m0.643s 00:08:28.252 user 0m0.396s 00:08:28.252 sys 0m0.374s 00:08:28.252 09:45:53 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:28.252 09:45:53 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:28.252 ************************************ 00:08:28.252 END TEST dd_sparse_file_to_bdev 00:08:28.252 ************************************ 00:08:28.252 09:45:53 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file 
bdev_to_file 00:08:28.252 09:45:53 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:28.252 09:45:53 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:28.252 09:45:53 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:08:28.252 ************************************ 00:08:28.252 START TEST dd_sparse_bdev_to_file 00:08:28.252 ************************************ 00:08:28.252 09:45:53 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1129 -- # bdev_to_file 00:08:28.252 09:45:53 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:08:28.252 09:45:53 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:08:28.252 09:45:53 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:08:28.252 09:45:53 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:08:28.252 09:45:53 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:08:28.252 09:45:53 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:08:28.252 09:45:53 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:08:28.252 09:45:53 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:08:28.511 [2024-12-06 09:45:53.548179] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:08:28.511 [2024-12-06 09:45:53.548254] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61457 ] 00:08:28.511 { 00:08:28.511 "subsystems": [ 00:08:28.511 { 00:08:28.511 "subsystem": "bdev", 00:08:28.511 "config": [ 00:08:28.511 { 00:08:28.511 "params": { 00:08:28.511 "block_size": 4096, 00:08:28.511 "filename": "dd_sparse_aio_disk", 00:08:28.511 "name": "dd_aio" 00:08:28.511 }, 00:08:28.511 "method": "bdev_aio_create" 00:08:28.511 }, 00:08:28.511 { 00:08:28.511 "method": "bdev_wait_for_examine" 00:08:28.511 } 00:08:28.511 ] 00:08:28.511 } 00:08:28.511 ] 00:08:28.511 } 00:08:28.511 [2024-12-06 09:45:53.687521] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.511 [2024-12-06 09:45:53.745061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.770 [2024-12-06 09:45:53.798477] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:28.770  [2024-12-06T09:45:54.301Z] Copying: 12/36 [MB] (average 800 MBps) 00:08:29.029 00:08:29.029 09:45:54 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:08:29.029 09:45:54 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:08:29.029 09:45:54 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:08:29.029 09:45:54 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:08:29.029 09:45:54 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 
37748736 == \3\7\7\4\8\7\3\6 ]] 00:08:29.029 09:45:54 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:08:29.029 09:45:54 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:08:29.029 09:45:54 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:08:29.029 09:45:54 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:08:29.029 09:45:54 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:08:29.029 00:08:29.029 real 0m0.610s 00:08:29.029 user 0m0.377s 00:08:29.029 sys 0m0.339s 00:08:29.029 09:45:54 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:29.029 09:45:54 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:08:29.029 ************************************ 00:08:29.029 END TEST dd_sparse_bdev_to_file 00:08:29.029 ************************************ 00:08:29.029 09:45:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:08:29.029 09:45:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:08:29.029 09:45:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:08:29.029 09:45:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:08:29.029 09:45:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:08:29.029 ************************************ 00:08:29.029 END TEST spdk_dd_sparse 00:08:29.029 ************************************ 00:08:29.029 00:08:29.029 real 0m2.303s 00:08:29.029 user 0m1.351s 00:08:29.029 sys 0m1.292s 00:08:29.029 09:45:54 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:29.029 09:45:54 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:08:29.029 09:45:54 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:08:29.029 09:45:54 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:29.029 09:45:54 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:29.029 09:45:54 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:29.029 ************************************ 00:08:29.029 START TEST spdk_dd_negative 00:08:29.029 ************************************ 00:08:29.029 09:45:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:08:29.288 * Looking for test storage... 
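Each spdk_dd_sparse case above ends with the same verification: source and destination are compared with stat, %s for apparent size and %b for allocated 512-byte blocks, and both must match (37748736 bytes and 24576 blocks here) to show the holes survived the copy through the AIO bdev and logical volume. A condensed sketch of that check, assuming GNU stat and the file names this suite uses; the exact-equality comparison only works because prepare built a controlled layout, since filesystems may round %b differently:

  src=file_zero1; dst=file_zero2
  [ "$(stat --printf=%s "$src")" -eq "$(stat --printf=%s "$dst")" ] || echo "apparent size mismatch"
  [ "$(stat --printf=%b "$src")" -eq "$(stat --printf=%b "$dst")" ] || echo "allocated blocks differ: holes not preserved"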
00:08:29.288 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:29.288 09:45:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:29.288 09:45:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1711 -- # lcov --version 00:08:29.288 09:45:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:29.288 09:45:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:29.288 09:45:54 spdk_dd.spdk_dd_negative -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:29.288 09:45:54 spdk_dd.spdk_dd_negative -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:29.288 09:45:54 spdk_dd.spdk_dd_negative -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:29.288 09:45:54 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # IFS=.-: 00:08:29.288 09:45:54 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # read -ra ver1 00:08:29.288 09:45:54 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # IFS=.-: 00:08:29.288 09:45:54 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # read -ra ver2 00:08:29.288 09:45:54 spdk_dd.spdk_dd_negative -- scripts/common.sh@338 -- # local 'op=<' 00:08:29.288 09:45:54 spdk_dd.spdk_dd_negative -- scripts/common.sh@340 -- # ver1_l=2 00:08:29.288 09:45:54 spdk_dd.spdk_dd_negative -- scripts/common.sh@341 -- # ver2_l=1 00:08:29.288 09:45:54 spdk_dd.spdk_dd_negative -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:29.288 09:45:54 spdk_dd.spdk_dd_negative -- scripts/common.sh@344 -- # case "$op" in 00:08:29.288 09:45:54 spdk_dd.spdk_dd_negative -- scripts/common.sh@345 -- # : 1 00:08:29.288 09:45:54 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:29.288 09:45:54 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:29.288 09:45:54 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # decimal 1 00:08:29.288 09:45:54 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=1 00:08:29.288 09:45:54 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:29.288 09:45:54 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 1 00:08:29.288 09:45:54 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # ver1[v]=1 00:08:29.288 09:45:54 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # decimal 2 00:08:29.288 09:45:54 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=2 00:08:29.288 09:45:54 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:29.288 09:45:54 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 2 00:08:29.288 09:45:54 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # ver2[v]=2 00:08:29.288 09:45:54 spdk_dd.spdk_dd_negative -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:29.288 09:45:54 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:29.288 09:45:54 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # return 0 00:08:29.288 09:45:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:29.288 09:45:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:29.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.288 --rc genhtml_branch_coverage=1 00:08:29.288 --rc genhtml_function_coverage=1 00:08:29.288 --rc genhtml_legend=1 00:08:29.288 --rc geninfo_all_blocks=1 00:08:29.288 --rc geninfo_unexecuted_blocks=1 00:08:29.288 00:08:29.288 ' 00:08:29.288 09:45:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:29.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.288 --rc genhtml_branch_coverage=1 00:08:29.288 --rc genhtml_function_coverage=1 00:08:29.288 --rc genhtml_legend=1 00:08:29.288 --rc geninfo_all_blocks=1 00:08:29.288 --rc geninfo_unexecuted_blocks=1 00:08:29.288 00:08:29.288 ' 00:08:29.288 09:45:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:29.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.288 --rc genhtml_branch_coverage=1 00:08:29.288 --rc genhtml_function_coverage=1 00:08:29.288 --rc genhtml_legend=1 00:08:29.288 --rc geninfo_all_blocks=1 00:08:29.288 --rc geninfo_unexecuted_blocks=1 00:08:29.288 00:08:29.288 ' 00:08:29.288 09:45:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:29.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.288 --rc genhtml_branch_coverage=1 00:08:29.288 --rc genhtml_function_coverage=1 00:08:29.288 --rc genhtml_legend=1 00:08:29.288 --rc geninfo_all_blocks=1 00:08:29.288 --rc geninfo_unexecuted_blocks=1 00:08:29.288 00:08:29.288 ' 00:08:29.288 09:45:54 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:29.288 09:45:54 spdk_dd.spdk_dd_negative -- scripts/common.sh@15 -- # shopt -s extglob 00:08:29.288 09:45:54 spdk_dd.spdk_dd_negative -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:29.288 09:45:54 spdk_dd.spdk_dd_negative -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:29.288 09:45:54 spdk_dd.spdk_dd_negative -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:08:29.288 09:45:54 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.288 09:45:54 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.288 09:45:54 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.288 09:45:54 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:08:29.289 09:45:54 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.289 09:45:54 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@210 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:29.289 09:45:54 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@211 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:29.289 09:45:54 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@213 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:29.289 09:45:54 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@214 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:29.289 09:45:54 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@216 -- # run_test dd_invalid_arguments invalid_arguments 00:08:29.289 09:45:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:29.289 09:45:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:29.289 09:45:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:29.289 ************************************ 00:08:29.289 START TEST 
dd_invalid_arguments 00:08:29.289 ************************************ 00:08:29.289 09:45:54 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1129 -- # invalid_arguments 00:08:29.289 09:45:54 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:08:29.289 09:45:54 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@652 -- # local es=0 00:08:29.289 09:45:54 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:08:29.289 09:45:54 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:29.289 09:45:54 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:29.289 09:45:54 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:29.289 09:45:54 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:29.289 09:45:54 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:29.289 09:45:54 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:29.289 09:45:54 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:29.289 09:45:54 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:29.289 09:45:54 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:08:29.289 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:08:29.289 00:08:29.289 CPU options: 00:08:29.289 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:08:29.289 (like [0,1,10]) 00:08:29.289 --lcores lcore to CPU mapping list. The list is in the format: 00:08:29.289 [<,lcores[@CPUs]>...] 00:08:29.289 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:08:29.289 Within the group, '-' is used for range separator, 00:08:29.289 ',' is used for single number separator. 00:08:29.289 '( )' can be omitted for single element group, 00:08:29.289 '@' can be omitted if cpus and lcores have the same value 00:08:29.289 --disable-cpumask-locks Disable CPU core lock files. 00:08:29.289 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:08:29.289 pollers in the app support interrupt mode) 00:08:29.289 -p, --main-core main (primary) core for DPDK 00:08:29.289 00:08:29.289 Configuration options: 00:08:29.289 -c, --config, --json JSON config file 00:08:29.289 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:08:29.289 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:08:29.289 --wait-for-rpc wait for RPCs to initialize subsystems 00:08:29.289 --rpcs-allowed comma-separated list of permitted RPCS 00:08:29.289 --json-ignore-init-errors don't exit on invalid config entry 00:08:29.289 00:08:29.289 Memory options: 00:08:29.289 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:08:29.289 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:08:29.289 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:08:29.289 -R, --huge-unlink unlink huge files after initialization 00:08:29.289 -n, --mem-channels number of memory channels used for DPDK 00:08:29.289 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:08:29.289 --msg-mempool-size global message memory pool size in count (default: 262143) 00:08:29.289 --no-huge run without using hugepages 00:08:29.289 --enforce-numa enforce NUMA allocations from the specified NUMA node 00:08:29.289 -i, --shm-id shared memory ID (optional) 00:08:29.289 -g, --single-file-segments force creating just one hugetlbfs file 00:08:29.289 00:08:29.289 PCI options: 00:08:29.289 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:08:29.289 -B, --pci-blocked pci addr to block (can be used more than once) 00:08:29.289 -u, --no-pci disable PCI access 00:08:29.289 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:08:29.289 00:08:29.289 Log options: 00:08:29.289 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:08:29.289 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:08:29.289 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:08:29.289 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:08:29.289 blobfs_rw, fsdev, fsdev_aio, ftl_core, ftl_init, gpt_parse, idxd, ioat, 00:08:29.289 iscsi_init, json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, 00:08:29.289 nvme, nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, 00:08:29.289 sock_posix, spdk_aio_mgr_io, thread, trace, uring, vbdev_delay, 00:08:29.289 vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, 00:08:29.289 vbdev_zone_block, vfio_pci, vfio_user, virtio, virtio_blk, virtio_dev, 00:08:29.289 virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:08:29.289 --silence-noticelog disable notice level logging to stderr 00:08:29.289 00:08:29.289 Trace options: 00:08:29.289 --num-trace-entries number of trace entries for each core, must be power of 2, 00:08:29.289 setting 0 to disable trace (default 32768) 00:08:29.289 Tracepoints vary in size and can use more than one trace entry. 00:08:29.289 -e, --tpoint-group [:] 00:08:29.289 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:08:29.289 [2024-12-06 09:45:54.510521] spdk_dd.c:1478:main: *ERROR*: Invalid arguments 00:08:29.289 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:08:29.289 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, blob, 00:08:29.289 bdev_raid, scheduler, all). 00:08:29.289 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:08:29.289 a tracepoint group. First tpoint inside a group can be enabled by 00:08:29.289 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:08:29.289 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 00:08:29.289 in /include/spdk_internal/trace_defs.h 00:08:29.289 00:08:29.289 Other options: 00:08:29.289 -h, --help show this usage 00:08:29.289 -v, --version print SPDK version 00:08:29.289 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:08:29.289 --env-context Opaque context for use of the env implementation 00:08:29.289 00:08:29.289 Application specific: 00:08:29.289 [--------- DD Options ---------] 00:08:29.289 --if Input file. Must specify either --if or --ib. 00:08:29.289 --ib Input bdev. Must specifier either --if or --ib 00:08:29.289 --of Output file. Must specify either --of or --ob. 00:08:29.289 --ob Output bdev. Must specify either --of or --ob. 00:08:29.289 --iflag Input file flags. 00:08:29.289 --oflag Output file flags. 00:08:29.289 --bs I/O unit size (default: 4096) 00:08:29.289 --qd Queue depth (default: 2) 00:08:29.289 --count I/O unit count. The number of I/O units to copy. (default: all) 00:08:29.289 --skip Skip this many I/O units at start of input. (default: 0) 00:08:29.289 --seek Skip this many I/O units at start of output. (default: 0) 00:08:29.289 --aio Force usage of AIO. (by default io_uring is used if available) 00:08:29.289 --sparse Enable hole skipping in input target 00:08:29.289 Available iflag and oflag values: 00:08:29.289 append - append mode 00:08:29.289 direct - use direct I/O for data 00:08:29.289 directory - fail unless a directory 00:08:29.289 dsync - use synchronized I/O for data 00:08:29.289 noatime - do not update access time 00:08:29.289 noctty - do not assign controlling terminal from file 00:08:29.289 nofollow - do not follow symlinks 00:08:29.289 nonblock - use non-blocking I/O 00:08:29.289 sync - use synchronized I/O for data and metadata 00:08:29.289 09:45:54 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@655 -- # es=2 00:08:29.289 09:45:54 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:29.289 09:45:54 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:29.289 09:45:54 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:29.289 00:08:29.289 real 0m0.082s 00:08:29.289 user 0m0.053s 00:08:29.289 sys 0m0.028s 00:08:29.289 09:45:54 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:29.289 ************************************ 00:08:29.289 END TEST dd_invalid_arguments 00:08:29.289 ************************************ 00:08:29.289 09:45:54 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:08:29.548 09:45:54 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@217 -- # run_test dd_double_input double_input 00:08:29.548 09:45:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:29.548 09:45:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:29.548 09:45:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:29.548 ************************************ 00:08:29.548 START TEST dd_double_input 00:08:29.548 ************************************ 00:08:29.548 09:45:54 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1129 -- # double_input 00:08:29.548 09:45:54 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:08:29.548 09:45:54 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@652 -- # local es=0 00:08:29.548 09:45:54 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:08:29.548 09:45:54 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:29.548 09:45:54 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:29.548 09:45:54 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:29.548 09:45:54 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:29.548 09:45:54 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:29.548 09:45:54 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:29.548 09:45:54 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:29.548 09:45:54 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:29.548 09:45:54 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:08:29.548 [2024-12-06 09:45:54.651701] spdk_dd.c:1485:main: *ERROR*: You may specify either --if or --ib, but not both. 
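Every case in spdk_dd_negative drives spdk_dd with deliberately invalid arguments and wraps the call in the suite's NOT helper, so a non-zero exit status is what counts as a pass; the es=22 handling that follows is that normalization. A minimal sketch of the idea only, assuming a POSIX shell; the real helper in common/autotest_common.sh also normalizes signal exits and manages xtrace, which is omitted here:

  NOT() {                      # illustrative stand-in: succeed only if the wrapped command fails
      if "$@"; then
          return 1             # unexpected success: the negative test should fail
      else
          return 0             # expected failure: the negative test passes
      fi
  }
  # e.g. the double-input case above: spdk_dd must reject --if together with --ib
  NOT ./build/bin/spdk_dd --if=test/dd/dd.dump0 --ib= --ob=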
00:08:29.548 09:45:54 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@655 -- # es=22 00:08:29.548 09:45:54 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:29.548 09:45:54 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:29.548 09:45:54 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:29.548 00:08:29.548 real 0m0.083s 00:08:29.548 user 0m0.053s 00:08:29.548 sys 0m0.028s 00:08:29.548 09:45:54 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:29.548 09:45:54 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:08:29.548 ************************************ 00:08:29.548 END TEST dd_double_input 00:08:29.548 ************************************ 00:08:29.548 09:45:54 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@218 -- # run_test dd_double_output double_output 00:08:29.548 09:45:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:29.548 09:45:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:29.548 09:45:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:29.548 ************************************ 00:08:29.548 START TEST dd_double_output 00:08:29.548 ************************************ 00:08:29.548 09:45:54 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1129 -- # double_output 00:08:29.548 09:45:54 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:08:29.548 09:45:54 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@652 -- # local es=0 00:08:29.548 09:45:54 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:08:29.548 09:45:54 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:29.548 09:45:54 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:29.548 09:45:54 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:29.548 09:45:54 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:29.548 09:45:54 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:29.548 09:45:54 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:29.548 09:45:54 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:29.548 09:45:54 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:29.548 09:45:54 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:08:29.548 [2024-12-06 09:45:54.789580] spdk_dd.c:1491:main: *ERROR*: You may specify either --of or --ob, but not both. 00:08:29.548 09:45:54 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@655 -- # es=22 00:08:29.548 09:45:54 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:29.548 09:45:54 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:29.548 09:45:54 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:29.548 00:08:29.548 real 0m0.082s 00:08:29.548 user 0m0.051s 00:08:29.548 sys 0m0.029s 00:08:29.548 09:45:54 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:29.548 09:45:54 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:08:29.548 ************************************ 00:08:29.548 END TEST dd_double_output 00:08:29.548 ************************************ 00:08:29.807 09:45:54 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@219 -- # run_test dd_no_input no_input 00:08:29.807 09:45:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:29.807 09:45:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:29.807 09:45:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:29.807 ************************************ 00:08:29.807 START TEST dd_no_input 00:08:29.807 ************************************ 00:08:29.807 09:45:54 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1129 -- # no_input 00:08:29.807 09:45:54 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:08:29.807 09:45:54 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@652 -- # local es=0 00:08:29.807 09:45:54 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:08:29.807 09:45:54 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:29.807 09:45:54 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:29.807 09:45:54 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:29.808 09:45:54 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:29.808 09:45:54 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:29.808 09:45:54 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:29.808 09:45:54 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:29.808 09:45:54 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:29.808 09:45:54 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:08:29.808 [2024-12-06 09:45:54.904392] spdk_dd.c:1497:main: 
*ERROR*: You must specify either --if or --ib 00:08:29.808 09:45:54 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@655 -- # es=22 00:08:29.808 09:45:54 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:29.808 09:45:54 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:29.808 09:45:54 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:29.808 00:08:29.808 real 0m0.057s 00:08:29.808 user 0m0.033s 00:08:29.808 sys 0m0.023s 00:08:29.808 09:45:54 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:29.808 09:45:54 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:08:29.808 ************************************ 00:08:29.808 END TEST dd_no_input 00:08:29.808 ************************************ 00:08:29.808 09:45:54 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@220 -- # run_test dd_no_output no_output 00:08:29.808 09:45:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:29.808 09:45:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:29.808 09:45:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:29.808 ************************************ 00:08:29.808 START TEST dd_no_output 00:08:29.808 ************************************ 00:08:29.808 09:45:54 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1129 -- # no_output 00:08:29.808 09:45:54 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:29.808 09:45:54 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@652 -- # local es=0 00:08:29.808 09:45:54 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:29.808 09:45:54 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:29.808 09:45:54 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:29.808 09:45:54 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:29.808 09:45:54 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:29.808 09:45:54 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:29.808 09:45:54 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:29.808 09:45:54 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:29.808 09:45:54 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:29.808 09:45:54 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:29.808 [2024-12-06 09:45:55.032268] spdk_dd.c:1503:main: *ERROR*: You must specify either --of or --ob 00:08:29.808 09:45:55 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@655 -- # es=22 00:08:29.808 09:45:55 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:29.808 09:45:55 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:29.808 09:45:55 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:29.808 00:08:29.808 real 0m0.086s 00:08:29.808 user 0m0.049s 00:08:29.808 sys 0m0.035s 00:08:29.808 09:45:55 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:29.808 09:45:55 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:08:29.808 ************************************ 00:08:29.808 END TEST dd_no_output 00:08:29.808 ************************************ 00:08:30.067 09:45:55 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@221 -- # run_test dd_wrong_blocksize wrong_blocksize 00:08:30.067 09:45:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:30.067 09:45:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:30.067 09:45:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:30.067 ************************************ 00:08:30.067 START TEST dd_wrong_blocksize 00:08:30.067 ************************************ 00:08:30.067 09:45:55 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1129 -- # wrong_blocksize 00:08:30.067 09:45:55 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:08:30.067 09:45:55 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@652 -- # local es=0 00:08:30.067 09:45:55 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:08:30.067 09:45:55 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:30.067 09:45:55 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:30.067 09:45:55 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:30.067 09:45:55 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:30.067 09:45:55 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:30.067 09:45:55 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:30.067 09:45:55 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:30.067 09:45:55 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:30.067 09:45:55 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:08:30.067 [2024-12-06 09:45:55.159946] spdk_dd.c:1509:main: *ERROR*: Invalid --bs value 00:08:30.067 09:45:55 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@655 -- # es=22 00:08:30.067 09:45:55 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:30.067 09:45:55 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:30.067 09:45:55 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:30.067 00:08:30.067 real 0m0.074s 00:08:30.067 user 0m0.054s 00:08:30.067 sys 0m0.018s 00:08:30.067 09:45:55 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:30.067 09:45:55 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:08:30.067 ************************************ 00:08:30.067 END TEST dd_wrong_blocksize 00:08:30.067 ************************************ 00:08:30.067 09:45:55 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@222 -- # run_test dd_smaller_blocksize smaller_blocksize 00:08:30.067 09:45:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:30.067 09:45:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:30.067 09:45:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:30.067 ************************************ 00:08:30.067 START TEST dd_smaller_blocksize 00:08:30.067 ************************************ 00:08:30.067 09:45:55 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1129 -- # smaller_blocksize 00:08:30.067 09:45:55 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:08:30.067 09:45:55 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@652 -- # local es=0 00:08:30.067 09:45:55 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:08:30.067 09:45:55 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:30.067 09:45:55 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:30.067 09:45:55 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:30.067 09:45:55 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:30.067 09:45:55 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:30.067 09:45:55 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:30.067 09:45:55 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:30.067 
09:45:55 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:30.067 09:45:55 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:08:30.067 [2024-12-06 09:45:55.282965] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:08:30.067 [2024-12-06 09:45:55.283080] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61684 ] 00:08:30.325 [2024-12-06 09:45:55.431292] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.325 [2024-12-06 09:45:55.482837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.325 [2024-12-06 09:45:55.538557] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:30.583 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:08:30.841 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:08:31.101 [2024-12-06 09:45:56.119890] spdk_dd.c:1182:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:08:31.101 [2024-12-06 09:45:56.119935] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:31.101 [2024-12-06 09:45:56.237812] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:08:31.101 09:45:56 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@655 -- # es=244 00:08:31.101 09:45:56 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:31.101 09:45:56 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@664 -- # es=116 00:08:31.101 09:45:56 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@665 -- # case "$es" in 00:08:31.101 09:45:56 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@672 -- # es=1 00:08:31.101 09:45:56 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:31.101 00:08:31.101 real 0m1.063s 00:08:31.101 user 0m0.383s 00:08:31.101 sys 0m0.573s 00:08:31.101 09:45:56 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:31.101 09:45:56 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:08:31.101 ************************************ 00:08:31.101 END TEST dd_smaller_blocksize 00:08:31.101 ************************************ 00:08:31.101 09:45:56 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@223 -- # run_test dd_invalid_count invalid_count 00:08:31.101 09:45:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:31.101 09:45:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:31.101 09:45:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:31.101 ************************************ 00:08:31.101 START TEST dd_invalid_count 00:08:31.101 ************************************ 00:08:31.101 09:45:56 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1129 -- # invalid_count 
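The dd_smaller_blocksize case above exercises the same pattern from the opposite direction: --bs=99999999999999 asks spdk_dd for a copy buffer far larger than the available hugepage memory, so the EAL memseg errors and the "Cannot allocate memory - try smaller block size value" message are the expected outcome and the wrapper records a pass. A reproduction sketch, assuming an SPDK build tree and the dd.dump0/dd.dump1 files this suite creates:

  ./build/bin/spdk_dd --if=test/dd/dd.dump0 --of=test/dd/dd.dump1 --bs=99999999999999 \
      && echo "unexpected success" || echo "failed as expected: block size too large to allocate"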
00:08:31.101 09:45:56 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:08:31.101 09:45:56 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@652 -- # local es=0 00:08:31.101 09:45:56 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:08:31.101 09:45:56 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:31.101 09:45:56 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:31.101 09:45:56 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:31.101 09:45:56 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:31.101 09:45:56 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:31.101 09:45:56 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:31.101 09:45:56 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:31.101 09:45:56 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:31.101 09:45:56 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:08:31.361 [2024-12-06 09:45:56.409106] spdk_dd.c:1515:main: *ERROR*: Invalid --count value 00:08:31.361 09:45:56 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@655 -- # es=22 00:08:31.361 09:45:56 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:31.361 09:45:56 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:31.361 09:45:56 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:31.361 00:08:31.361 real 0m0.079s 00:08:31.361 user 0m0.044s 00:08:31.361 sys 0m0.034s 00:08:31.361 09:45:56 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:31.361 ************************************ 00:08:31.361 END TEST dd_invalid_count 00:08:31.361 ************************************ 00:08:31.361 09:45:56 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:08:31.361 09:45:56 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@224 -- # run_test dd_invalid_oflag invalid_oflag 00:08:31.361 09:45:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:31.361 09:45:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:31.361 09:45:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:31.361 ************************************ 
00:08:31.361 START TEST dd_invalid_oflag 00:08:31.361 ************************************ 00:08:31.361 09:45:56 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1129 -- # invalid_oflag 00:08:31.361 09:45:56 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:08:31.361 09:45:56 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@652 -- # local es=0 00:08:31.361 09:45:56 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:08:31.361 09:45:56 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:31.361 09:45:56 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:31.361 09:45:56 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:31.361 09:45:56 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:31.361 09:45:56 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:31.361 09:45:56 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:31.361 09:45:56 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:31.361 09:45:56 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:31.361 09:45:56 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:08:31.361 [2024-12-06 09:45:56.535407] spdk_dd.c:1521:main: *ERROR*: --oflags may be used only with --of 00:08:31.361 09:45:56 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@655 -- # es=22 00:08:31.361 09:45:56 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:31.361 09:45:56 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:31.361 09:45:56 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:31.361 00:08:31.361 real 0m0.072s 00:08:31.361 user 0m0.043s 00:08:31.361 sys 0m0.028s 00:08:31.361 09:45:56 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:31.361 ************************************ 00:08:31.361 END TEST dd_invalid_oflag 00:08:31.361 ************************************ 00:08:31.361 09:45:56 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:08:31.361 09:45:56 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@225 -- # run_test dd_invalid_iflag invalid_iflag 00:08:31.361 09:45:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:31.361 09:45:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:31.362 09:45:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:31.362 ************************************ 00:08:31.362 START TEST dd_invalid_iflag 00:08:31.362 
************************************ 00:08:31.362 09:45:56 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1129 -- # invalid_iflag 00:08:31.362 09:45:56 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:08:31.362 09:45:56 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@652 -- # local es=0 00:08:31.362 09:45:56 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:08:31.362 09:45:56 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:31.362 09:45:56 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:31.362 09:45:56 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:31.362 09:45:56 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:31.362 09:45:56 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:31.362 09:45:56 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:31.362 09:45:56 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:31.362 09:45:56 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:31.362 09:45:56 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:08:31.621 [2024-12-06 09:45:56.653997] spdk_dd.c:1527:main: *ERROR*: --iflags may be used only with --if 00:08:31.621 09:45:56 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@655 -- # es=22 00:08:31.621 09:45:56 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:31.621 09:45:56 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:31.621 09:45:56 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:31.621 00:08:31.621 real 0m0.059s 00:08:31.621 user 0m0.036s 00:08:31.621 sys 0m0.022s 00:08:31.621 09:45:56 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:31.621 09:45:56 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:08:31.621 ************************************ 00:08:31.621 END TEST dd_invalid_iflag 00:08:31.621 ************************************ 00:08:31.621 09:45:56 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@226 -- # run_test dd_unknown_flag unknown_flag 00:08:31.621 09:45:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:31.621 09:45:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:31.621 09:45:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:31.621 ************************************ 00:08:31.621 START TEST dd_unknown_flag 00:08:31.621 ************************************ 00:08:31.621 
09:45:56 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1129 -- # unknown_flag 00:08:31.621 09:45:56 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:08:31.621 09:45:56 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@652 -- # local es=0 00:08:31.621 09:45:56 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:08:31.621 09:45:56 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:31.621 09:45:56 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:31.621 09:45:56 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:31.621 09:45:56 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:31.621 09:45:56 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:31.621 09:45:56 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:31.621 09:45:56 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:31.621 09:45:56 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:31.621 09:45:56 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:08:31.621 [2024-12-06 09:45:56.778030] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:08:31.622 [2024-12-06 09:45:56.778109] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61781 ] 00:08:31.881 [2024-12-06 09:45:56.923720] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.881 [2024-12-06 09:45:56.969579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.881 [2024-12-06 09:45:57.022080] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:31.881 [2024-12-06 09:45:57.056359] spdk_dd.c: 984:parse_flags: *ERROR*: Unknown file flag: -1 00:08:31.881 [2024-12-06 09:45:57.056411] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:31.881 [2024-12-06 09:45:57.056463] spdk_dd.c: 984:parse_flags: *ERROR*: Unknown file flag: -1 00:08:31.881 [2024-12-06 09:45:57.056475] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:31.881 [2024-12-06 09:45:57.056724] spdk_dd.c:1216:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:08:31.881 [2024-12-06 09:45:57.056740] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:31.881 [2024-12-06 09:45:57.056790] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:08:31.881 [2024-12-06 09:45:57.056802] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:08:32.140 [2024-12-06 09:45:57.173140] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:08:32.140 09:45:57 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@655 -- # es=234 00:08:32.140 09:45:57 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:32.140 09:45:57 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@664 -- # es=106 00:08:32.140 09:45:57 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@665 -- # case "$es" in 00:08:32.140 09:45:57 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@672 -- # es=1 00:08:32.140 09:45:57 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:32.140 00:08:32.140 real 0m0.511s 00:08:32.140 user 0m0.268s 00:08:32.140 sys 0m0.160s 00:08:32.140 09:45:57 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:32.140 09:45:57 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:08:32.140 ************************************ 00:08:32.140 END TEST dd_unknown_flag 00:08:32.140 ************************************ 00:08:32.140 09:45:57 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@227 -- # run_test dd_invalid_json invalid_json 00:08:32.140 09:45:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:32.140 09:45:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:32.140 09:45:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:32.140 ************************************ 00:08:32.140 START TEST dd_invalid_json 00:08:32.140 ************************************ 00:08:32.140 09:45:57 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1129 -- # invalid_json 00:08:32.140 09:45:57 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:08:32.140 09:45:57 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@652 -- # local es=0 00:08:32.140 09:45:57 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # : 00:08:32.140 09:45:57 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:08:32.140 09:45:57 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:32.140 09:45:57 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:32.140 09:45:57 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:32.140 09:45:57 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:32.140 09:45:57 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:32.140 09:45:57 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:32.140 09:45:57 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:32.140 09:45:57 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:32.140 09:45:57 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:08:32.140 [2024-12-06 09:45:57.344774] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:08:32.140 [2024-12-06 09:45:57.344903] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61810 ] 00:08:32.399 [2024-12-06 09:45:57.493189] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.399 [2024-12-06 09:45:57.532692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.399 [2024-12-06 09:45:57.532777] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:08:32.399 [2024-12-06 09:45:57.532795] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:32.399 [2024-12-06 09:45:57.532804] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:32.399 [2024-12-06 09:45:57.532838] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:08:32.399 09:45:57 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@655 -- # es=234 00:08:32.399 09:45:57 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:32.399 09:45:57 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@664 -- # es=106 00:08:32.399 09:45:57 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@665 -- # case "$es" in 00:08:32.399 09:45:57 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@672 -- # es=1 00:08:32.399 09:45:57 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:32.399 00:08:32.399 real 0m0.310s 00:08:32.399 user 0m0.138s 00:08:32.399 sys 0m0.069s 00:08:32.399 09:45:57 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:32.399 09:45:57 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:08:32.399 ************************************ 00:08:32.399 END TEST dd_invalid_json 00:08:32.399 ************************************ 00:08:32.399 09:45:57 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@228 -- # run_test dd_invalid_seek invalid_seek 00:08:32.399 09:45:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:32.399 09:45:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:32.399 09:45:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:32.399 ************************************ 00:08:32.399 START TEST dd_invalid_seek 00:08:32.399 ************************************ 00:08:32.399 09:45:57 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1129 -- # invalid_seek 00:08:32.399 09:45:57 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@102 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:08:32.399 09:45:57 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:08:32.399 09:45:57 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # local -A method_bdev_malloc_create_0 00:08:32.399 09:45:57 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@108 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:08:32.400 09:45:57 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:08:32.400 
09:45:57 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # local -A method_bdev_malloc_create_1 00:08:32.400 09:45:57 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:08:32.400 09:45:57 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@652 -- # local es=0 00:08:32.400 09:45:57 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:08:32.400 09:45:57 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # gen_conf 00:08:32.400 09:45:57 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:32.400 09:45:57 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/common.sh@31 -- # xtrace_disable 00:08:32.400 09:45:57 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:08:32.400 09:45:57 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:32.400 09:45:57 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:32.400 09:45:57 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:32.400 09:45:57 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:32.400 09:45:57 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:32.400 09:45:57 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:32.400 09:45:57 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:32.400 09:45:57 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:08:32.659 { 00:08:32.659 "subsystems": [ 00:08:32.659 { 00:08:32.659 "subsystem": "bdev", 00:08:32.659 "config": [ 00:08:32.659 { 00:08:32.659 "params": { 00:08:32.659 "block_size": 512, 00:08:32.659 "num_blocks": 512, 00:08:32.659 "name": "malloc0" 00:08:32.659 }, 00:08:32.659 "method": "bdev_malloc_create" 00:08:32.659 }, 00:08:32.659 { 00:08:32.659 "params": { 00:08:32.659 "block_size": 512, 00:08:32.659 "num_blocks": 512, 00:08:32.659 "name": "malloc1" 00:08:32.660 }, 00:08:32.660 "method": "bdev_malloc_create" 00:08:32.660 }, 00:08:32.660 { 00:08:32.660 "method": "bdev_wait_for_examine" 00:08:32.660 } 00:08:32.660 ] 00:08:32.660 } 00:08:32.660 ] 00:08:32.660 } 00:08:32.660 [2024-12-06 09:45:57.711480] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:08:32.660 [2024-12-06 09:45:57.711622] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61839 ] 00:08:32.660 [2024-12-06 09:45:57.854917] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.660 [2024-12-06 09:45:57.892737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.918 [2024-12-06 09:45:57.951440] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:32.918 [2024-12-06 09:45:58.016079] spdk_dd.c:1143:dd_run: *ERROR*: --seek value too big (513) - only 512 blocks available in output 00:08:32.918 [2024-12-06 09:45:58.016148] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:32.918 [2024-12-06 09:45:58.131783] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:08:33.177 09:45:58 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@655 -- # es=228 00:08:33.177 09:45:58 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:33.177 09:45:58 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@664 -- # es=100 00:08:33.177 09:45:58 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@665 -- # case "$es" in 00:08:33.177 09:45:58 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@672 -- # es=1 00:08:33.177 09:45:58 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:33.177 00:08:33.177 real 0m0.547s 00:08:33.177 user 0m0.329s 00:08:33.177 sys 0m0.174s 00:08:33.177 09:45:58 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:33.177 09:45:58 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:08:33.177 ************************************ 00:08:33.177 END TEST dd_invalid_seek 00:08:33.177 ************************************ 00:08:33.177 09:45:58 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@229 -- # run_test dd_invalid_skip invalid_skip 00:08:33.177 09:45:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:33.177 09:45:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:33.177 09:45:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:33.177 ************************************ 00:08:33.177 START TEST dd_invalid_skip 00:08:33.177 ************************************ 00:08:33.177 09:45:58 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1129 -- # invalid_skip 00:08:33.177 09:45:58 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@125 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:08:33.178 09:45:58 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:08:33.178 09:45:58 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # local -A method_bdev_malloc_create_0 00:08:33.178 09:45:58 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@131 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:08:33.178 09:45:58 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' 
['block_size']='512') 00:08:33.178 09:45:58 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # local -A method_bdev_malloc_create_1 00:08:33.178 09:45:58 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:08:33.178 09:45:58 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@652 -- # local es=0 00:08:33.178 09:45:58 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:08:33.178 09:45:58 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:33.178 09:45:58 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # gen_conf 00:08:33.178 09:45:58 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/common.sh@31 -- # xtrace_disable 00:08:33.178 09:45:58 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:08:33.178 09:45:58 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:33.178 09:45:58 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:33.178 09:45:58 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:33.178 09:45:58 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:33.178 09:45:58 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:33.178 09:45:58 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:33.178 09:45:58 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:33.178 09:45:58 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:08:33.178 { 00:08:33.178 "subsystems": [ 00:08:33.178 { 00:08:33.178 "subsystem": "bdev", 00:08:33.178 "config": [ 00:08:33.178 { 00:08:33.178 "params": { 00:08:33.178 "block_size": 512, 00:08:33.178 "num_blocks": 512, 00:08:33.178 "name": "malloc0" 00:08:33.178 }, 00:08:33.178 "method": "bdev_malloc_create" 00:08:33.178 }, 00:08:33.178 { 00:08:33.178 "params": { 00:08:33.178 "block_size": 512, 00:08:33.178 "num_blocks": 512, 00:08:33.178 "name": "malloc1" 00:08:33.178 }, 00:08:33.178 "method": "bdev_malloc_create" 00:08:33.178 }, 00:08:33.178 { 00:08:33.178 "method": "bdev_wait_for_examine" 00:08:33.178 } 00:08:33.178 ] 00:08:33.178 } 00:08:33.178 ] 00:08:33.178 } 00:08:33.178 [2024-12-06 09:45:58.307254] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:08:33.178 [2024-12-06 09:45:58.307345] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61873 ] 00:08:33.437 [2024-12-06 09:45:58.452408] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.437 [2024-12-06 09:45:58.510711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.437 [2024-12-06 09:45:58.565526] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:33.437 [2024-12-06 09:45:58.626538] spdk_dd.c:1100:dd_run: *ERROR*: --skip value too big (513) - only 512 blocks available in input 00:08:33.437 [2024-12-06 09:45:58.626597] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:33.696 [2024-12-06 09:45:58.742444] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:08:33.696 09:45:58 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@655 -- # es=228 00:08:33.696 09:45:58 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:33.696 09:45:58 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@664 -- # es=100 00:08:33.696 09:45:58 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@665 -- # case "$es" in 00:08:33.696 09:45:58 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@672 -- # es=1 00:08:33.696 09:45:58 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:33.696 00:08:33.696 real 0m0.555s 00:08:33.696 user 0m0.356s 00:08:33.696 sys 0m0.156s 00:08:33.696 09:45:58 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:33.696 09:45:58 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:08:33.696 ************************************ 00:08:33.696 END TEST dd_invalid_skip 00:08:33.696 ************************************ 00:08:33.696 09:45:58 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@230 -- # run_test dd_invalid_input_count invalid_input_count 00:08:33.696 09:45:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:33.696 09:45:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:33.696 09:45:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:33.696 ************************************ 00:08:33.696 START TEST dd_invalid_input_count 00:08:33.696 ************************************ 00:08:33.696 09:45:58 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1129 -- # invalid_input_count 00:08:33.696 09:45:58 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@149 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:08:33.696 09:45:58 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:08:33.696 09:45:58 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # local -A method_bdev_malloc_create_0 00:08:33.696 09:45:58 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@155 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:08:33.696 09:45:58 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # 
method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:08:33.696 09:45:58 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # local -A method_bdev_malloc_create_1 00:08:33.696 09:45:58 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:08:33.696 09:45:58 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@652 -- # local es=0 00:08:33.696 09:45:58 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # gen_conf 00:08:33.697 09:45:58 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:08:33.697 09:45:58 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:33.697 09:45:58 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/common.sh@31 -- # xtrace_disable 00:08:33.697 09:45:58 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:08:33.697 09:45:58 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:33.697 09:45:58 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:33.697 09:45:58 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:33.697 09:45:58 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:33.697 09:45:58 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:33.697 09:45:58 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:33.697 09:45:58 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:33.697 09:45:58 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:08:33.697 { 00:08:33.697 "subsystems": [ 00:08:33.697 { 00:08:33.697 "subsystem": "bdev", 00:08:33.697 "config": [ 00:08:33.697 { 00:08:33.697 "params": { 00:08:33.697 "block_size": 512, 00:08:33.697 "num_blocks": 512, 00:08:33.697 "name": "malloc0" 00:08:33.697 }, 00:08:33.697 "method": "bdev_malloc_create" 00:08:33.697 }, 00:08:33.697 { 00:08:33.697 "params": { 00:08:33.697 "block_size": 512, 00:08:33.697 "num_blocks": 512, 00:08:33.697 "name": "malloc1" 00:08:33.697 }, 00:08:33.697 "method": "bdev_malloc_create" 00:08:33.697 }, 00:08:33.697 { 00:08:33.697 "method": "bdev_wait_for_examine" 00:08:33.697 } 00:08:33.697 ] 00:08:33.697 } 00:08:33.697 ] 00:08:33.697 } 00:08:33.697 [2024-12-06 09:45:58.925821] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:08:33.697 [2024-12-06 09:45:58.925909] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61906 ] 00:08:33.956 [2024-12-06 09:45:59.072704] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.956 [2024-12-06 09:45:59.117542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.956 [2024-12-06 09:45:59.172909] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:34.215 [2024-12-06 09:45:59.236519] spdk_dd.c:1108:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available from input 00:08:34.215 [2024-12-06 09:45:59.236586] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:34.215 [2024-12-06 09:45:59.357026] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:08:34.215 09:45:59 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@655 -- # es=228 00:08:34.215 09:45:59 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:34.215 09:45:59 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@664 -- # es=100 00:08:34.215 09:45:59 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@665 -- # case "$es" in 00:08:34.215 09:45:59 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@672 -- # es=1 00:08:34.215 09:45:59 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:34.215 00:08:34.215 real 0m0.560s 00:08:34.215 user 0m0.359s 00:08:34.215 sys 0m0.164s 00:08:34.215 09:45:59 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:34.215 09:45:59 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:08:34.215 ************************************ 00:08:34.215 END TEST dd_invalid_input_count 00:08:34.215 ************************************ 00:08:34.215 09:45:59 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@231 -- # run_test dd_invalid_output_count invalid_output_count 00:08:34.215 09:45:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:34.215 09:45:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:34.215 09:45:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:34.215 ************************************ 00:08:34.215 START TEST dd_invalid_output_count 00:08:34.215 ************************************ 00:08:34.215 09:45:59 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1129 -- # invalid_output_count 00:08:34.215 09:45:59 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@173 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:08:34.215 09:45:59 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:08:34.215 09:45:59 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # local -A method_bdev_malloc_create_0 00:08:34.215 09:45:59 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:08:34.215 09:45:59 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@652 -- # local es=0 00:08:34.215 09:45:59 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # gen_conf 00:08:34.215 09:45:59 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:08:34.215 09:45:59 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:34.215 09:45:59 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/common.sh@31 -- # xtrace_disable 00:08:34.215 09:45:59 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:08:34.215 09:45:59 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:34.216 09:45:59 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:34.475 09:45:59 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:34.475 09:45:59 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:34.475 09:45:59 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:34.475 09:45:59 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:34.475 09:45:59 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:34.475 09:45:59 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:08:34.475 { 00:08:34.475 "subsystems": [ 00:08:34.475 { 00:08:34.475 "subsystem": "bdev", 00:08:34.475 "config": [ 00:08:34.475 { 00:08:34.475 "params": { 00:08:34.475 "block_size": 512, 00:08:34.475 "num_blocks": 512, 00:08:34.475 "name": "malloc0" 00:08:34.475 }, 00:08:34.475 "method": "bdev_malloc_create" 00:08:34.475 }, 00:08:34.475 { 00:08:34.475 "method": "bdev_wait_for_examine" 00:08:34.475 } 00:08:34.475 ] 00:08:34.475 } 00:08:34.475 ] 00:08:34.475 } 00:08:34.475 [2024-12-06 09:45:59.542912] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:08:34.475 [2024-12-06 09:45:59.543008] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61940 ] 00:08:34.475 [2024-12-06 09:45:59.687686] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.475 [2024-12-06 09:45:59.741796] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.734 [2024-12-06 09:45:59.794761] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:34.734 [2024-12-06 09:45:59.850622] spdk_dd.c:1150:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available in output 00:08:34.734 [2024-12-06 09:45:59.850688] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:34.734 [2024-12-06 09:45:59.966989] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:08:34.993 09:46:00 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@655 -- # es=228 00:08:34.993 09:46:00 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:34.993 09:46:00 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@664 -- # es=100 00:08:34.993 09:46:00 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@665 -- # case "$es" in 00:08:34.993 09:46:00 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@672 -- # es=1 00:08:34.993 09:46:00 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:34.993 00:08:34.993 real 0m0.552s 00:08:34.993 user 0m0.347s 00:08:34.993 sys 0m0.165s 00:08:34.993 09:46:00 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:34.993 09:46:00 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:08:34.993 ************************************ 00:08:34.993 END TEST dd_invalid_output_count 00:08:34.993 ************************************ 00:08:34.993 09:46:00 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@232 -- # run_test dd_bs_not_multiple bs_not_multiple 00:08:34.993 09:46:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:34.993 09:46:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:34.993 09:46:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:34.993 ************************************ 00:08:34.993 START TEST dd_bs_not_multiple 00:08:34.993 ************************************ 00:08:34.993 09:46:00 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1129 -- # bs_not_multiple 00:08:34.993 09:46:00 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@190 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:08:34.993 09:46:00 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:08:34.993 09:46:00 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # local -A method_bdev_malloc_create_0 00:08:34.993 09:46:00 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@196 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:08:34.993 09:46:00 
spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:08:34.993 09:46:00 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # local -A method_bdev_malloc_create_1 00:08:34.993 09:46:00 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:08:34.993 09:46:00 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@652 -- # local es=0 00:08:34.993 09:46:00 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # gen_conf 00:08:34.994 09:46:00 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:08:34.994 09:46:00 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:34.994 09:46:00 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/common.sh@31 -- # xtrace_disable 00:08:34.994 09:46:00 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:08:34.994 09:46:00 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:34.994 09:46:00 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:34.994 09:46:00 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:34.994 09:46:00 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:34.994 09:46:00 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:34.994 09:46:00 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:34.994 09:46:00 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:34.994 09:46:00 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:08:34.994 { 00:08:34.994 "subsystems": [ 00:08:34.994 { 00:08:34.994 "subsystem": "bdev", 00:08:34.994 "config": [ 00:08:34.994 { 00:08:34.994 "params": { 00:08:34.994 "block_size": 512, 00:08:34.994 "num_blocks": 512, 00:08:34.994 "name": "malloc0" 00:08:34.994 }, 00:08:34.994 "method": "bdev_malloc_create" 00:08:34.994 }, 00:08:34.994 { 00:08:34.994 "params": { 00:08:34.994 "block_size": 512, 00:08:34.994 "num_blocks": 512, 00:08:34.994 "name": "malloc1" 00:08:34.994 }, 00:08:34.994 "method": "bdev_malloc_create" 00:08:34.994 }, 00:08:34.994 { 00:08:34.994 "method": "bdev_wait_for_examine" 00:08:34.994 } 00:08:34.994 ] 00:08:34.994 } 00:08:34.994 ] 00:08:34.994 } 00:08:34.994 [2024-12-06 09:46:00.166614] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:08:34.994 [2024-12-06 09:46:00.166775] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61971 ] 00:08:35.253 [2024-12-06 09:46:00.315551] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.253 [2024-12-06 09:46:00.368520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.253 [2024-12-06 09:46:00.421885] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:35.253 [2024-12-06 09:46:00.483478] spdk_dd.c:1166:dd_run: *ERROR*: --bs value must be a multiple of input native block size (512) 00:08:35.253 [2024-12-06 09:46:00.483536] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:35.512 [2024-12-06 09:46:00.607747] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:08:35.512 09:46:00 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@655 -- # es=234 00:08:35.512 09:46:00 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:35.512 09:46:00 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@664 -- # es=106 00:08:35.512 09:46:00 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@665 -- # case "$es" in 00:08:35.512 09:46:00 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@672 -- # es=1 00:08:35.512 09:46:00 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:35.512 00:08:35.512 real 0m0.584s 00:08:35.512 user 0m0.372s 00:08:35.512 sys 0m0.173s 00:08:35.512 09:46:00 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:35.512 09:46:00 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:08:35.512 ************************************ 00:08:35.512 END TEST dd_bs_not_multiple 00:08:35.512 ************************************ 00:08:35.512 00:08:35.512 real 0m6.482s 00:08:35.512 user 0m3.368s 00:08:35.512 sys 0m2.526s 00:08:35.512 09:46:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:35.512 09:46:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:35.512 ************************************ 00:08:35.512 END TEST spdk_dd_negative 00:08:35.512 ************************************ 00:08:35.512 00:08:35.512 real 1m17.948s 00:08:35.512 user 0m49.317s 00:08:35.512 sys 0m36.306s 00:08:35.512 09:46:00 spdk_dd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:35.512 ************************************ 00:08:35.512 09:46:00 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:35.512 END TEST spdk_dd 00:08:35.512 ************************************ 00:08:35.775 09:46:00 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:08:35.775 09:46:00 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:08:35.775 09:46:00 -- spdk/autotest.sh@260 -- # timing_exit lib 00:08:35.775 09:46:00 -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:35.775 09:46:00 -- common/autotest_common.sh@10 -- # set +x 00:08:35.775 09:46:00 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:08:35.775 09:46:00 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:08:35.775 09:46:00 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:08:35.775 09:46:00 -- spdk/autotest.sh@277 -- 
# export NET_TYPE 00:08:35.775 09:46:00 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:08:35.775 09:46:00 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:08:35.775 09:46:00 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:35.775 09:46:00 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:35.775 09:46:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:35.775 09:46:00 -- common/autotest_common.sh@10 -- # set +x 00:08:35.775 ************************************ 00:08:35.775 START TEST nvmf_tcp 00:08:35.775 ************************************ 00:08:35.775 09:46:00 nvmf_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:35.775 * Looking for test storage... 00:08:35.775 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:08:35.775 09:46:00 nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:35.775 09:46:00 nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:35.775 09:46:00 nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:08:35.775 09:46:01 nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:35.776 09:46:01 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:35.776 09:46:01 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:35.776 09:46:01 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:35.776 09:46:01 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:08:35.776 09:46:01 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:08:35.776 09:46:01 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:08:35.776 09:46:01 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:08:35.776 09:46:01 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:08:35.776 09:46:01 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:08:35.776 09:46:01 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:08:35.776 09:46:01 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:35.776 09:46:01 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:08:35.776 09:46:01 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:08:35.776 09:46:01 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:35.776 09:46:01 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:35.776 09:46:01 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:08:35.776 09:46:01 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:08:35.776 09:46:01 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:35.776 09:46:01 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:08:35.776 09:46:01 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:08:36.041 09:46:01 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:08:36.041 09:46:01 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:08:36.041 09:46:01 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:36.041 09:46:01 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:08:36.041 09:46:01 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:08:36.041 09:46:01 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:36.041 09:46:01 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:36.041 09:46:01 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:08:36.041 09:46:01 nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:36.041 09:46:01 nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:36.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.041 --rc genhtml_branch_coverage=1 00:08:36.041 --rc genhtml_function_coverage=1 00:08:36.041 --rc genhtml_legend=1 00:08:36.041 --rc geninfo_all_blocks=1 00:08:36.041 --rc geninfo_unexecuted_blocks=1 00:08:36.041 00:08:36.041 ' 00:08:36.041 09:46:01 nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:36.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.041 --rc genhtml_branch_coverage=1 00:08:36.041 --rc genhtml_function_coverage=1 00:08:36.041 --rc genhtml_legend=1 00:08:36.041 --rc geninfo_all_blocks=1 00:08:36.041 --rc geninfo_unexecuted_blocks=1 00:08:36.041 00:08:36.041 ' 00:08:36.041 09:46:01 nvmf_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:36.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.041 --rc genhtml_branch_coverage=1 00:08:36.041 --rc genhtml_function_coverage=1 00:08:36.041 --rc genhtml_legend=1 00:08:36.041 --rc geninfo_all_blocks=1 00:08:36.041 --rc geninfo_unexecuted_blocks=1 00:08:36.041 00:08:36.041 ' 00:08:36.041 09:46:01 nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:36.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.041 --rc genhtml_branch_coverage=1 00:08:36.041 --rc genhtml_function_coverage=1 00:08:36.041 --rc genhtml_legend=1 00:08:36.041 --rc geninfo_all_blocks=1 00:08:36.041 --rc geninfo_unexecuted_blocks=1 00:08:36.041 00:08:36.041 ' 00:08:36.041 09:46:01 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:08:36.041 09:46:01 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:08:36.041 09:46:01 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:08:36.041 09:46:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:36.041 09:46:01 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:36.041 09:46:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:36.041 ************************************ 00:08:36.041 START TEST nvmf_target_core 00:08:36.041 ************************************ 00:08:36.041 09:46:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:08:36.041 * Looking for test storage... 00:08:36.041 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:08:36.041 09:46:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:36.041 09:46:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lcov --version 00:08:36.041 09:46:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:36.042 09:46:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:36.042 09:46:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:36.042 09:46:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:36.042 09:46:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:36.042 09:46:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:08:36.042 09:46:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:08:36.042 09:46:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:08:36.042 09:46:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:08:36.042 09:46:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:08:36.042 09:46:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:08:36.042 09:46:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:08:36.042 09:46:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:36.042 09:46:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:08:36.042 09:46:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:08:36.042 09:46:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:36.042 09:46:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:36.042 09:46:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:08:36.042 09:46:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:08:36.042 09:46:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:36.042 09:46:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:08:36.042 09:46:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:08:36.042 09:46:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:08:36.042 09:46:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:08:36.042 09:46:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:36.042 09:46:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:08:36.042 09:46:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:08:36.042 09:46:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:36.042 09:46:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:36.042 09:46:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:08:36.042 09:46:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:36.042 09:46:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:36.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.042 --rc genhtml_branch_coverage=1 00:08:36.042 --rc genhtml_function_coverage=1 00:08:36.042 --rc genhtml_legend=1 00:08:36.042 --rc geninfo_all_blocks=1 00:08:36.042 --rc geninfo_unexecuted_blocks=1 00:08:36.042 00:08:36.042 ' 00:08:36.042 09:46:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:36.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.042 --rc genhtml_branch_coverage=1 00:08:36.042 --rc genhtml_function_coverage=1 00:08:36.042 --rc genhtml_legend=1 00:08:36.042 --rc geninfo_all_blocks=1 00:08:36.042 --rc geninfo_unexecuted_blocks=1 00:08:36.042 00:08:36.042 ' 00:08:36.042 09:46:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:36.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.042 --rc genhtml_branch_coverage=1 00:08:36.042 --rc genhtml_function_coverage=1 00:08:36.042 --rc genhtml_legend=1 00:08:36.042 --rc geninfo_all_blocks=1 00:08:36.042 --rc geninfo_unexecuted_blocks=1 00:08:36.042 00:08:36.042 ' 00:08:36.042 09:46:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:36.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.042 --rc genhtml_branch_coverage=1 00:08:36.042 --rc genhtml_function_coverage=1 00:08:36.042 --rc genhtml_legend=1 00:08:36.042 --rc geninfo_all_blocks=1 00:08:36.042 --rc geninfo_unexecuted_blocks=1 00:08:36.042 00:08:36.042 ' 00:08:36.042 09:46:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:08:36.042 09:46:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:08:36.042 09:46:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:36.042 09:46:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:08:36.042 09:46:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:36.042 09:46:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:36.042 09:46:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:36.042 09:46:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:36.042 09:46:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:36.042 09:46:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:36.042 09:46:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:36.042 09:46:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:36.042 09:46:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:36.042 09:46:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:36.042 09:46:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:08:36.042 09:46:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:08:36.042 09:46:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:36.042 09:46:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:36.042 09:46:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:36.042 09:46:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:36.042 09:46:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:36.042 09:46:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:08:36.042 09:46:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:36.042 09:46:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:36.042 09:46:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:36.042 09:46:01 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.042 09:46:01 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
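The xtrace above shows scripts/common.sh comparing the installed lcov version against 2 with its lt/cmp_versions helpers before choosing coverage flags. What follows is only a minimal standalone sketch of that style of dotted-version comparison, not the SPDK helper itself; the function name version_lt is ours.

  # Sketch: return 0 (true) when $1 is a strictly older dotted version than $2.
  version_lt() {
      local IFS=.
      local -a a=($1) b=($2)
      local i
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          local x=${a[i]:-0} y=${b[i]:-0}
          (( x < y )) && return 0
          (( x > y )) && return 1
      done
      return 1   # equal versions are not "less than"
  }

  version_lt 1.15 2 && echo "lcov 1.15 predates 2, keep the legacy --rc options"

In the trace, this decision is what selects the --rc lcov_branch_coverage/lcov_function_coverage options exported immediately afterwards.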
00:08:36.042 09:46:01 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.042 09:46:01 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:08:36.042 09:46:01 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.042 09:46:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:08:36.042 09:46:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:36.042 09:46:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:36.042 09:46:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:36.042 09:46:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:36.042 09:46:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:36.042 09:46:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:36.042 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:36.042 09:46:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:36.042 09:46:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:36.042 09:46:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:36.042 09:46:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:08:36.042 09:46:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:08:36.042 09:46:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 1 -eq 0 ]] 00:08:36.042 09:46:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:36.042 09:46:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:36.042 09:46:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:36.042 09:46:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:36.042 ************************************ 00:08:36.042 START TEST nvmf_host_management 00:08:36.042 ************************************ 00:08:36.042 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:36.301 * Looking for test storage... 
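The "[: : integer expression expected" message above comes from common.sh line 33 evaluating '[' '' -eq 1 ']': -eq needs integers on both sides, and the tested variable is empty in this configuration. A small hedged reproduction of that bash behaviour, with one common guard; the variable name is illustrative, not the one common.sh uses.

  flag=""
  if [ "$flag" -eq 1 ] 2>/dev/null; then   # same failure: empty string is not an integer
      echo "flag set"
  fi

  if [ "${flag:-0}" -eq 1 ]; then          # defensive form: default to 0 when empty/unset
      echo "flag set"
  else
      echo "flag not set"                  # taken for an empty flag
  fi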
00:08:36.301 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:36.301 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:36.301 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:08:36.301 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:36.301 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:36.301 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:36.301 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:36.301 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:36.301 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:08:36.301 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:08:36.301 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:08:36.301 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:08:36.301 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:08:36.301 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:08:36.301 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:08:36.301 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:36.301 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:08:36.301 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:08:36.301 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:36.302 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:36.302 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:08:36.302 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:08:36.302 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:36.302 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:08:36.302 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:08:36.302 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:08:36.302 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:08:36.302 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:36.302 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:08:36.302 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:08:36.302 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:36.302 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:36.302 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:08:36.302 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:36.302 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:36.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.302 --rc genhtml_branch_coverage=1 00:08:36.302 --rc genhtml_function_coverage=1 00:08:36.302 --rc genhtml_legend=1 00:08:36.302 --rc geninfo_all_blocks=1 00:08:36.302 --rc geninfo_unexecuted_blocks=1 00:08:36.302 00:08:36.302 ' 00:08:36.302 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:36.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.302 --rc genhtml_branch_coverage=1 00:08:36.302 --rc genhtml_function_coverage=1 00:08:36.302 --rc genhtml_legend=1 00:08:36.302 --rc geninfo_all_blocks=1 00:08:36.302 --rc geninfo_unexecuted_blocks=1 00:08:36.302 00:08:36.302 ' 00:08:36.302 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:36.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.302 --rc genhtml_branch_coverage=1 00:08:36.302 --rc genhtml_function_coverage=1 00:08:36.302 --rc genhtml_legend=1 00:08:36.302 --rc geninfo_all_blocks=1 00:08:36.302 --rc geninfo_unexecuted_blocks=1 00:08:36.302 00:08:36.302 ' 00:08:36.302 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:36.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.302 --rc genhtml_branch_coverage=1 00:08:36.302 --rc genhtml_function_coverage=1 00:08:36.302 --rc genhtml_legend=1 00:08:36.302 --rc geninfo_all_blocks=1 00:08:36.302 --rc geninfo_unexecuted_blocks=1 00:08:36.302 00:08:36.302 ' 00:08:36.302 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
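The host_management test sources test/nvmf/common.sh here; as the earlier trace shows (and the one that follows repeats), that file fixes the listener ports and generates a per-run host identity with nvme gen-hostnqn. A hedged sketch of the pattern; the host-ID derivation and the commented connect line are illustrations only, not commands issued by this run.

  NVMF_PORT=4420
  NVME_HOSTNQN=$(nvme gen-hostnqn)            # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*:}             # one way to reuse the uuid as the host ID
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")

  # Example initiator-side use of that identity (not run in this job):
  # nvme connect -t tcp -a 10.0.0.3 -s "$NVMF_PORT" \
  #     -n nqn.2016-06.io.spdk:testnqn "${NVME_HOST[@]}"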
00:08:36.302 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:08:36.302 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:36.302 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:36.302 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:36.302 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:36.302 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:36.302 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:36.302 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:36.302 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:36.302 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:36.302 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:36.302 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:08:36.302 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:08:36.302 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:36.302 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:36.302 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:36.302 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:36.302 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:36.302 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:08:36.302 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:36.302 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:36.302 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:36.302 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.302 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.302 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.302 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:08:36.302 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.302 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:08:36.302 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:36.302 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:36.302 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:36.302 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:36.302 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:36.302 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:36.302 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:36.302 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:36.302 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:36.302 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:36.302 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:36.302 09:46:01 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:36.302 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:08:36.302 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:36.302 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:36.302 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:36.302 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:36.302 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:36.302 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:36.302 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:36.302 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:36.302 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:36.302 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:36.302 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:36.302 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:36.302 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:36.302 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:36.302 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:36.302 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:36.303 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:36.303 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:36.303 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:36.303 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:36.303 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:36.303 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:36.303 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:36.303 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:36.303 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:36.303 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:36.303 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:36.303 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:36.303 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:36.303 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:36.303 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:36.303 Cannot find device "nvmf_init_br" 00:08:36.303 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:08:36.303 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:36.303 Cannot find device "nvmf_init_br2" 00:08:36.303 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:08:36.303 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:36.561 Cannot find device "nvmf_tgt_br" 00:08:36.561 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # true 00:08:36.561 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:36.561 Cannot find device "nvmf_tgt_br2" 00:08:36.561 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # true 00:08:36.561 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:36.561 Cannot find device "nvmf_init_br" 00:08:36.561 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # true 00:08:36.561 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:36.561 Cannot find device "nvmf_init_br2" 00:08:36.561 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # true 00:08:36.561 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:36.561 Cannot find device "nvmf_tgt_br" 00:08:36.561 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # true 00:08:36.561 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:36.561 Cannot find device "nvmf_tgt_br2" 00:08:36.561 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # true 00:08:36.561 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:36.561 Cannot find device "nvmf_br" 00:08:36.561 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # true 00:08:36.561 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:36.561 Cannot find device "nvmf_init_if" 00:08:36.561 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # true 00:08:36.561 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:36.561 Cannot find device "nvmf_init_if2" 00:08:36.561 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # true 00:08:36.561 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:36.561 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:36.561 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # true 00:08:36.561 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:36.561 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:36.561 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # true 00:08:36.561 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:36.561 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:36.561 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:36.561 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:36.561 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:36.561 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:36.561 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:36.561 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:36.561 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:36.561 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:36.561 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:36.561 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:36.561 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:36.561 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:36.561 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:36.562 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:36.562 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:36.562 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:36.562 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:36.562 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:36.562 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip 
link add nvmf_br type bridge 00:08:36.821 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:36.821 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:36.821 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:36.821 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:36.821 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:36.821 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:36.821 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:36.821 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:36.821 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:36.821 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:36.821 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:36.821 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:36.821 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:36.821 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.083 ms 00:08:36.821 00:08:36.821 --- 10.0.0.3 ping statistics --- 00:08:36.821 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:36.821 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:08:36.821 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:36.821 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:36.821 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.065 ms 00:08:36.821 00:08:36.821 --- 10.0.0.4 ping statistics --- 00:08:36.821 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:36.821 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:08:36.821 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:36.821 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:36.821 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:08:36.821 00:08:36.821 --- 10.0.0.1 ping statistics --- 00:08:36.821 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:36.821 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:08:36.821 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:36.821 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
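The nvmf_veth_init sequence traced above builds the test topology: a target network namespace, veth pairs for the initiator and target interfaces, all of them enslaved to a single bridge, iptables ACCEPT rules for port 4420, and pings to prove reachability before any NVMe-oF traffic flows. A condensed, hedged sketch of the same shape, one initiator/target pair only, with names and addresses taken from the log:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator side
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if    # target side

  ip link add nvmf_br type bridge
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip link set nvmf_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link set nvmf_init_br master nvmf_br            # bridge the root-namespace veth ends
  ip link set nvmf_tgt_br master nvmf_br

  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.3                                 # root namespace -> target namespace
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1  # target namespace -> root namespace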
00:08:36.821 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:08:36.821 00:08:36.821 --- 10.0.0.2 ping statistics --- 00:08:36.821 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:36.821 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:08:36.821 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:36.821 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@461 -- # return 0 00:08:36.821 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:36.821 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:36.821 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:36.821 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:36.821 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:36.821 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:36.821 09:46:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:36.821 09:46:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:08:36.821 09:46:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:08:36.821 09:46:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:08:36.821 09:46:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:36.821 09:46:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:36.821 09:46:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:36.821 09:46:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=62320 00:08:36.821 09:46:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:08:36.821 09:46:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 62320 00:08:36.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:36.821 09:46:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 62320 ']' 00:08:36.821 09:46:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:36.821 09:46:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:36.821 09:46:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:36.821 09:46:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:36.821 09:46:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:36.821 [2024-12-06 09:46:02.088839] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
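nvmfappstart then launches the target inside that namespace with -m 0x1E, a core mask selecting cores 1-4, which matches the four reactor threads reported in the EAL output below. A hedged sketch of the launch-and-wait pattern; paths follow the log, and the polling loop is a simplified stand-in for the waitforlisten helper, using the rpc.py that ships with SPDK.

  # Start nvmf_tgt in the target namespace on cores 1-4 (0x1E = 0b11110).
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
  nvmfpid=$!

  # Poll the RPC socket until framework init completes (simplified waitforlisten).
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
          framework_wait_init >/dev/null 2>&1; do
      sleep 0.5
  done
  echo "nvmf_tgt ($nvmfpid) is ready"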
00:08:36.821 [2024-12-06 09:46:02.088941] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:37.081 [2024-12-06 09:46:02.245501] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:37.081 [2024-12-06 09:46:02.333115] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:37.081 [2024-12-06 09:46:02.333458] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:37.081 [2024-12-06 09:46:02.333790] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:37.081 [2024-12-06 09:46:02.333945] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:37.081 [2024-12-06 09:46:02.334147] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:37.081 [2024-12-06 09:46:02.335776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:37.081 [2024-12-06 09:46:02.336000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:37.081 [2024-12-06 09:46:02.335851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:37.081 [2024-12-06 09:46:02.335993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:37.341 [2024-12-06 09:46:02.414327] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:37.910 09:46:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:37.910 09:46:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:08:37.910 09:46:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:37.910 09:46:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:37.910 09:46:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:38.170 09:46:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:38.170 09:46:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:38.170 09:46:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.171 09:46:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:38.171 [2024-12-06 09:46:03.202379] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:38.171 09:46:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.171 09:46:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:08:38.171 09:46:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:38.171 09:46:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:38.171 09:46:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 
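With the TCP transport already created (nvmf_create_transport -t tcp -o -u 8192 above), the rpc_cmd batch that follows is fed from a generated rpcs.txt and produces the Malloc0 bdev plus a TCP listener on 10.0.0.3:4420. The file contents are not echoed in the log; a hedged sketch of a typical RPC sequence behind that output, with names taken from the log and the options otherwise illustrative, would be:

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"

  $RPC bdev_malloc_create 64 512 -b Malloc0           # MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
  $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420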
00:08:38.171 09:46:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:08:38.171 09:46:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:08:38.171 09:46:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.171 09:46:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:38.171 Malloc0 00:08:38.171 [2024-12-06 09:46:03.291605] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:38.171 09:46:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.171 09:46:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:38.171 09:46:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:38.171 09:46:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:38.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:38.171 09:46:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=62380 00:08:38.171 09:46:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 62380 /var/tmp/bdevperf.sock 00:08:38.171 09:46:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 62380 ']' 00:08:38.171 09:46:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:38.171 09:46:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:38.171 09:46:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:08:38.171 09:46:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:38.171 09:46:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:38.171 09:46:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:38.171 09:46:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:38.171 09:46:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:08:38.171 09:46:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:08:38.171 09:46:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:38.171 09:46:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:38.171 { 00:08:38.171 "params": { 00:08:38.171 "name": "Nvme$subsystem", 00:08:38.171 "trtype": "$TEST_TRANSPORT", 00:08:38.171 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:38.171 "adrfam": "ipv4", 00:08:38.171 "trsvcid": "$NVMF_PORT", 00:08:38.171 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:38.171 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:38.171 "hdgst": ${hdgst:-false}, 00:08:38.171 "ddgst": ${ddgst:-false} 00:08:38.171 }, 00:08:38.171 "method": "bdev_nvme_attach_controller" 00:08:38.171 } 00:08:38.171 EOF 00:08:38.171 )") 00:08:38.171 09:46:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:08:38.171 09:46:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:08:38.171 09:46:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:08:38.171 09:46:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:38.171 "params": { 00:08:38.171 "name": "Nvme0", 00:08:38.171 "trtype": "tcp", 00:08:38.171 "traddr": "10.0.0.3", 00:08:38.171 "adrfam": "ipv4", 00:08:38.171 "trsvcid": "4420", 00:08:38.171 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:38.171 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:38.171 "hdgst": false, 00:08:38.171 "ddgst": false 00:08:38.171 }, 00:08:38.171 "method": "bdev_nvme_attach_controller" 00:08:38.171 }' 00:08:38.171 [2024-12-06 09:46:03.407769] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:08:38.171 [2024-12-06 09:46:03.407864] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62380 ] 00:08:38.430 [2024-12-06 09:46:03.561940] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.431 [2024-12-06 09:46:03.624553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.431 [2024-12-06 09:46:03.693772] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:38.689 Running I/O for 10 seconds... 
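bdevperf is pointed at a JSON config produced by gen_nvmf_target_json which, as printed above, reduces to a single bdev_nvme_attach_controller call against the target's listener; the remaining flags request a 10-second verify workload at queue depth 64 with 64 KiB I/Os. For reference, a hedged sketch of the equivalent one-off attach through SPDK's rpc.py; flag spellings follow the stock helper and may differ between releases.

  # Attach the remote namespace as bdev Nvme0n1, mirroring the JSON shown above.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      bdev_nvme_attach_controller -b Nvme0 \
      -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode0 \
      -q nqn.2016-06.io.spdk:host0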
00:08:39.259 09:46:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:39.259 09:46:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:08:39.259 09:46:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:39.259 09:46:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.259 09:46:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:39.259 09:46:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.259 09:46:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:39.259 09:46:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:39.259 09:46:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:39.259 09:46:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:39.259 09:46:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:08:39.259 09:46:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:08:39.259 09:46:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:39.259 09:46:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:39.259 09:46:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:39.259 09:46:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.259 09:46:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:39.259 09:46:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:39.259 09:46:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.259 09:46:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=899 00:08:39.259 09:46:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 899 -ge 100 ']' 00:08:39.259 09:46:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:08:39.259 09:46:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:08:39.259 09:46:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:08:39.260 09:46:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:39.260 09:46:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.260 09:46:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:39.260 09:46:04 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.260 09:46:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:39.260 09:46:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.260 09:46:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:39.260 09:46:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.260 09:46:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:08:39.520 [2024-12-06 09:46:04.530124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.520 [2024-12-06 09:46:04.530182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.521 [2024-12-06 09:46:04.530214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.521 [2024-12-06 09:46:04.530233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.521 [2024-12-06 09:46:04.530244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.521 [2024-12-06 09:46:04.530252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.521 [2024-12-06 09:46:04.530263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.521 [2024-12-06 09:46:04.530275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.521 [2024-12-06 09:46:04.530297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.521 [2024-12-06 09:46:04.530308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.521 [2024-12-06 09:46:04.530335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.521 [2024-12-06 09:46:04.530344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.521 [2024-12-06 09:46:04.530355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.521 [2024-12-06 09:46:04.530364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.521 [2024-12-06 09:46:04.530375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.521 [2024-12-06 09:46:04.530384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:08:39.521 [2024-12-06 09:46:04.530412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.521 [2024-12-06 09:46:04.530422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.521 [2024-12-06 09:46:04.530441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.521 [2024-12-06 09:46:04.530450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.521 [2024-12-06 09:46:04.530462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:1280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.521 [2024-12-06 09:46:04.530482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.521 [2024-12-06 09:46:04.530493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:1408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.521 [2024-12-06 09:46:04.530534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.521 [2024-12-06 09:46:04.530548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:1536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.521 [2024-12-06 09:46:04.530564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.521 [2024-12-06 09:46:04.530593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:1664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.521 [2024-12-06 09:46:04.530605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.521 [2024-12-06 09:46:04.530617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:1792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.521 [2024-12-06 09:46:04.530627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.521 [2024-12-06 09:46:04.530638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.521 [2024-12-06 09:46:04.530648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.521 [2024-12-06 09:46:04.530659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:2048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.521 [2024-12-06 09:46:04.530669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.521 [2024-12-06 09:46:04.530681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:2176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.521 [2024-12-06 09:46:04.530690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.521 
[2024-12-06 09:46:04.530701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:2304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.521 [2024-12-06 09:46:04.530711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.521 [2024-12-06 09:46:04.530722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:2432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.521 [2024-12-06 09:46:04.530731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.521 [2024-12-06 09:46:04.530742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:2560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.521 [2024-12-06 09:46:04.530751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.521 [2024-12-06 09:46:04.530762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:2688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.521 [2024-12-06 09:46:04.530781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.521 [2024-12-06 09:46:04.530795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:2816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.521 [2024-12-06 09:46:04.530825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.521 [2024-12-06 09:46:04.530852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:2944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.521 [2024-12-06 09:46:04.530877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.521 [2024-12-06 09:46:04.530902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:3072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.521 [2024-12-06 09:46:04.530925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.521 [2024-12-06 09:46:04.530935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:3200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.521 [2024-12-06 09:46:04.530944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.521 [2024-12-06 09:46:04.530953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:3328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.521 [2024-12-06 09:46:04.530961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.521 [2024-12-06 09:46:04.530971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:3456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.521 [2024-12-06 09:46:04.530983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.521 [2024-12-06 
09:46:04.530993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:3584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.521 [2024-12-06 09:46:04.531000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.521 [2024-12-06 09:46:04.531010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:3712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.521 [2024-12-06 09:46:04.531018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.521 [2024-12-06 09:46:04.531028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:3840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.521 [2024-12-06 09:46:04.531036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.521 [2024-12-06 09:46:04.531045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:3968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.521 [2024-12-06 09:46:04.531053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.521 [2024-12-06 09:46:04.531063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:4096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.521 [2024-12-06 09:46:04.531070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.521 [2024-12-06 09:46:04.531115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:4224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.521 [2024-12-06 09:46:04.531123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.521 [2024-12-06 09:46:04.531133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:4352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.521 [2024-12-06 09:46:04.531140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.521 [2024-12-06 09:46:04.531150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:4480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.521 [2024-12-06 09:46:04.531157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.521 [2024-12-06 09:46:04.531167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:4608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.521 [2024-12-06 09:46:04.531175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.521 [2024-12-06 09:46:04.531184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:4736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.521 [2024-12-06 09:46:04.531192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.521 [2024-12-06 09:46:04.531212] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:4864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.521 [2024-12-06 09:46:04.531220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.521 [2024-12-06 09:46:04.531229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:4992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.522 [2024-12-06 09:46:04.531253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.522 [2024-12-06 09:46:04.531263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:5120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.522 [2024-12-06 09:46:04.531270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.522 [2024-12-06 09:46:04.531280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:5248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.522 [2024-12-06 09:46:04.531288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.522 [2024-12-06 09:46:04.531298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:5376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.522 [2024-12-06 09:46:04.531306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.522 [2024-12-06 09:46:04.531316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:5504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.522 [2024-12-06 09:46:04.531337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.522 [2024-12-06 09:46:04.531346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:5632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.522 [2024-12-06 09:46:04.531355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.522 [2024-12-06 09:46:04.531364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:5760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.522 [2024-12-06 09:46:04.531373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.522 [2024-12-06 09:46:04.531382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:5888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.522 [2024-12-06 09:46:04.531390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.522 [2024-12-06 09:46:04.531399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:6016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.522 [2024-12-06 09:46:04.531407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.522 [2024-12-06 09:46:04.531418] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:6144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.522 [2024-12-06 09:46:04.531426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.522 [2024-12-06 09:46:04.531440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:6272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.522 [2024-12-06 09:46:04.531448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.522 [2024-12-06 09:46:04.531458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:6400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.522 [2024-12-06 09:46:04.531467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.522 [2024-12-06 09:46:04.531477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:6528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.522 [2024-12-06 09:46:04.531495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.522 [2024-12-06 09:46:04.531518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:6656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.522 [2024-12-06 09:46:04.531525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.522 [2024-12-06 09:46:04.531535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:6784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.522 [2024-12-06 09:46:04.531542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.522 [2024-12-06 09:46:04.531552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:6912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.522 [2024-12-06 09:46:04.531560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.522 [2024-12-06 09:46:04.531595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:7040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.522 [2024-12-06 09:46:04.531621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.522 [2024-12-06 09:46:04.531632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:7168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.522 [2024-12-06 09:46:04.531641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.522 [2024-12-06 09:46:04.531651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:7296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.522 [2024-12-06 09:46:04.531669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.522 [2024-12-06 09:46:04.531690] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:7424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.522 [2024-12-06 09:46:04.531710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.522 [2024-12-06 09:46:04.531736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:7552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.522 [2024-12-06 09:46:04.531749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.522 [2024-12-06 09:46:04.531759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:7680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.522 [2024-12-06 09:46:04.531784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.522 [2024-12-06 09:46:04.531794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:7808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.522 [2024-12-06 09:46:04.531803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.522 [2024-12-06 09:46:04.531813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:7936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.522 [2024-12-06 09:46:04.531821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.522 [2024-12-06 09:46:04.531832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:8064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.522 [2024-12-06 09:46:04.531841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.522 [2024-12-06 09:46:04.531851] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a7c00 is same with the state(6) to be set 00:08:39.522 [2024-12-06 09:46:04.532116] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:08:39.522 [2024-12-06 09:46:04.532135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.522 [2024-12-06 09:46:04.532145] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:08:39.522 [2024-12-06 09:46:04.532168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.522 [2024-12-06 09:46:04.532177] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:08:39.522 [2024-12-06 09:46:04.532185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.522 [2024-12-06 09:46:04.532194] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:08:39.522 [2024-12-06 09:46:04.532201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.522 [2024-12-06 09:46:04.532209] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a8ce0 is same with the state(6) to be set 00:08:39.522 [2024-12-06 09:46:04.533431] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:08:39.522 task offset: 0 on job bdev=Nvme0n1 fails 00:08:39.522 00:08:39.522 Latency(us) 00:08:39.522 [2024-12-06T09:46:04.794Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:39.522 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:39.522 Job: Nvme0n1 ended in about 0.71 seconds with error 00:08:39.522 Verification LBA range: start 0x0 length 0x400 00:08:39.522 Nvme0n1 : 0.71 1452.33 90.77 90.77 0.00 40479.98 2651.23 40274.85 00:08:39.522 [2024-12-06T09:46:04.794Z] =================================================================================================================== 00:08:39.522 [2024-12-06T09:46:04.794Z] Total : 1452.33 90.77 90.77 0.00 40479.98 2651.23 40274.85 00:08:39.522 [2024-12-06 09:46:04.535637] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:39.522 [2024-12-06 09:46:04.535698] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a8ce0 (9): Bad file descriptor 00:08:39.522 [2024-12-06 09:46:04.544593] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:08:40.461 09:46:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 62380 00:08:40.461 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (62380) - No such process 00:08:40.461 09:46:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:08:40.461 09:46:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:08:40.461 09:46:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:08:40.461 09:46:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:08:40.461 09:46:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:08:40.461 09:46:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:08:40.461 09:46:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:40.461 09:46:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:40.461 { 00:08:40.461 "params": { 00:08:40.461 "name": "Nvme$subsystem", 00:08:40.461 "trtype": "$TEST_TRANSPORT", 00:08:40.461 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:40.461 "adrfam": "ipv4", 00:08:40.461 "trsvcid": "$NVMF_PORT", 00:08:40.461 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:40.461 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:40.461 "hdgst": ${hdgst:-false}, 00:08:40.461 "ddgst": ${ddgst:-false} 00:08:40.461 }, 00:08:40.461 "method": "bdev_nvme_attach_controller" 00:08:40.461 } 00:08:40.461 EOF 00:08:40.461 )") 00:08:40.461 09:46:05 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:08:40.461 09:46:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:08:40.461 09:46:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:08:40.461 09:46:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:40.461 "params": { 00:08:40.461 "name": "Nvme0", 00:08:40.461 "trtype": "tcp", 00:08:40.461 "traddr": "10.0.0.3", 00:08:40.461 "adrfam": "ipv4", 00:08:40.461 "trsvcid": "4420", 00:08:40.461 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:40.461 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:40.461 "hdgst": false, 00:08:40.461 "ddgst": false 00:08:40.461 }, 00:08:40.461 "method": "bdev_nvme_attach_controller" 00:08:40.461 }' 00:08:40.461 [2024-12-06 09:46:05.602068] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:08:40.461 [2024-12-06 09:46:05.602195] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62418 ] 00:08:40.721 [2024-12-06 09:46:05.748868] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.721 [2024-12-06 09:46:05.793944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.721 [2024-12-06 09:46:05.857382] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:40.721 Running I/O for 1 seconds... 00:08:42.102 1600.00 IOPS, 100.00 MiB/s 00:08:42.102 Latency(us) 00:08:42.102 [2024-12-06T09:46:07.374Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:42.102 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:42.102 Verification LBA range: start 0x0 length 0x400 00:08:42.102 Nvme0n1 : 1.04 1604.52 100.28 0.00 0.00 39193.52 3768.32 36700.16 00:08:42.102 [2024-12-06T09:46:07.374Z] =================================================================================================================== 00:08:42.102 [2024-12-06T09:46:07.374Z] Total : 1604.52 100.28 0.00 0.00 39193.52 3768.32 36700.16 00:08:42.102 09:46:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:08:42.102 09:46:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:08:42.102 09:46:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:08:42.102 09:46:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:08:42.102 09:46:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:08:42.102 09:46:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:42.102 09:46:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:08:42.102 09:46:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:42.102 09:46:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:08:42.102 09:46:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 
00:08:42.102 09:46:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:42.102 rmmod nvme_tcp 00:08:42.102 rmmod nvme_fabrics 00:08:42.102 rmmod nvme_keyring 00:08:42.102 09:46:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:42.102 09:46:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:08:42.102 09:46:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:08:42.102 09:46:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 62320 ']' 00:08:42.102 09:46:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 62320 00:08:42.102 09:46:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 62320 ']' 00:08:42.102 09:46:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 62320 00:08:42.102 09:46:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:08:42.102 09:46:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:42.102 09:46:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62320 00:08:42.361 09:46:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:42.361 09:46:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:42.361 killing process with pid 62320 00:08:42.361 09:46:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62320' 00:08:42.361 09:46:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 62320 00:08:42.361 09:46:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 62320 00:08:42.621 [2024-12-06 09:46:07.668944] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:08:42.621 09:46:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:42.621 09:46:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:42.621 09:46:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:42.621 09:46:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:08:42.621 09:46:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:08:42.621 09:46:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:08:42.621 09:46:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:42.621 09:46:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:42.621 09:46:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:42.621 09:46:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:42.621 09:46:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:42.621 09:46:07 
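The cleanup that nvmftestfini performs is what this part of the trace walks through: retry unloading the kernel initiator modules (the rmmod lines for nvme_tcp, nvme_fabrics and nvme_keyring), kill the nvmf target app (pid 62320, running as reactor_1), strip the SPDK-tagged iptables rules, and tear down the veth/bridge topology (continued just below). A condensed sketch of the module-unload and iptables steps, assuming the retry count and rule tag shown in the log; the loop body is a simplification, not the verbatim common.sh code:

# sketch of the cleanup pattern visible above
set +e
for i in {1..20}; do
    # nvme-tcp can stay busy for a moment after the test, hence the retries
    modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
    sleep 1
done
set -e

# keep every iptables rule except the ones SPDK added (tagged SPDK_NVMF)
iptables-save | grep -v SPDK_NVMF | iptables-restore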
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:42.621 09:46:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:42.621 09:46:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:42.621 09:46:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:42.621 09:46:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:42.621 09:46:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:42.621 09:46:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:42.621 09:46:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:42.621 09:46:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:42.880 09:46:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:42.880 09:46:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:42.880 09:46:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:42.880 09:46:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:42.880 09:46:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:42.880 09:46:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:42.880 09:46:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@300 -- # return 0 00:08:42.880 09:46:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:08:42.880 00:08:42.880 real 0m6.674s 00:08:42.880 user 0m24.290s 00:08:42.880 sys 0m1.728s 00:08:42.880 09:46:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:42.880 ************************************ 00:08:42.880 09:46:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:42.880 END TEST nvmf_host_management 00:08:42.880 ************************************ 00:08:42.880 09:46:08 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:42.880 09:46:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:42.880 09:46:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:42.880 09:46:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:42.880 ************************************ 00:08:42.880 START TEST nvmf_lvol 00:08:42.880 ************************************ 00:08:42.880 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:42.880 * Looking for test storage... 
00:08:42.880 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:42.880 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:42.880 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:08:42.880 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:43.139 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:43.139 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:43.139 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:43.139 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:43.139 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:08:43.139 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:08:43.139 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:08:43.139 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:08:43.139 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:08:43.139 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:08:43.139 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:08:43.139 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:43.139 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:08:43.139 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:08:43.139 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:43.139 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:43.139 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:08:43.139 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:08:43.139 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:43.139 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:08:43.139 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:08:43.139 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:08:43.139 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:08:43.139 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:43.139 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:08:43.139 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:08:43.139 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:43.139 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:43.139 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:08:43.139 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:43.139 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:43.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:43.139 --rc genhtml_branch_coverage=1 00:08:43.139 --rc genhtml_function_coverage=1 00:08:43.139 --rc genhtml_legend=1 00:08:43.139 --rc geninfo_all_blocks=1 00:08:43.139 --rc geninfo_unexecuted_blocks=1 00:08:43.139 00:08:43.139 ' 00:08:43.139 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:43.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:43.139 --rc genhtml_branch_coverage=1 00:08:43.139 --rc genhtml_function_coverage=1 00:08:43.139 --rc genhtml_legend=1 00:08:43.139 --rc geninfo_all_blocks=1 00:08:43.139 --rc geninfo_unexecuted_blocks=1 00:08:43.139 00:08:43.139 ' 00:08:43.139 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:43.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:43.139 --rc genhtml_branch_coverage=1 00:08:43.139 --rc genhtml_function_coverage=1 00:08:43.139 --rc genhtml_legend=1 00:08:43.139 --rc geninfo_all_blocks=1 00:08:43.139 --rc geninfo_unexecuted_blocks=1 00:08:43.139 00:08:43.139 ' 00:08:43.139 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:43.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:43.140 --rc genhtml_branch_coverage=1 00:08:43.140 --rc genhtml_function_coverage=1 00:08:43.140 --rc genhtml_legend=1 00:08:43.140 --rc geninfo_all_blocks=1 00:08:43.140 --rc geninfo_unexecuted_blocks=1 00:08:43.140 00:08:43.140 ' 00:08:43.140 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:43.140 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:08:43.140 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:43.140 09:46:08 
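The block above is the xtrace of scripts/common.sh deciding whether the installed lcov is older than 2.x (cmp_versions 1.15 '<' 2) before exporting the legacy --rc lcov_branch_coverage/lcov_function_coverage options. A simplified stand-in for that comparison follows; it uses the same idea of splitting versions on '.', '-' and ':' and comparing field by field, but it is not the verbatim helper:

# simplified version comparison; succeeds when $1 < $2
cmp_lt() {
    local -a a b
    IFS='.-:' read -ra a <<< "$1"
    IFS='.-:' read -ra b <<< "$2"
    local i max=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < max; i++ )); do
        if (( ${a[i]:-0} < ${b[i]:-0} )); then return 0; fi
        if (( ${a[i]:-0} > ${b[i]:-0} )); then return 1; fi
    done
    return 1    # versions are equal, so not strictly less-than
}

# as in the trace: lcov 1.15 is older than 2, so the legacy flags are exported
cmp_lt 1.15 2 && LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'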
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:43.140 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:43.140 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:43.140 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:43.140 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:43.140 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:43.140 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:43.140 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:43.140 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:43.140 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:08:43.140 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:08:43.140 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:43.140 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:43.140 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:43.140 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:43.140 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:43.140 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:08:43.140 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:43.140 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:43.140 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:43.140 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.140 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.140 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.140 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:08:43.140 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.140 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:08:43.140 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:43.140 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:43.140 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:43.140 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:43.140 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:43.140 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:43.140 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:43.140 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:43.140 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:43.140 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:43.140 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:43.140 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:43.140 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:08:43.140 
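The last few entries above set the sizes the lvol test works with: a 64 MiB malloc backing bdev with 512-byte blocks, an initial logical volume of 20 MiB, and (just below) a final size of 30 MiB. The script's own RPC sequence is not part of this portion of the log, so the following is only a hypothetical illustration of how such sizes are typically consumed with standard SPDK RPCs; flag and argument names can differ between SPDK versions:

# hypothetical illustration, not the steps nvmf_lvol.sh actually traces here
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc bdev_malloc_create 64 512 -b Malloc0       # MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE
$rpc bdev_lvol_create_lvstore Malloc0 lvs0      # lvol store on the malloc bdev
$rpc bdev_lvol_create -l lvs0 lvol0 20          # LVOL_BDEV_INIT_SIZE (MiB)
$rpc bdev_lvol_resize lvs0/lvol0 30             # grow toward LVOL_BDEV_FINAL_SIZE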
09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:43.140 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:43.140 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:43.140 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:43.140 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:43.140 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:43.140 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:43.140 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:43.140 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:43.140 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:43.140 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:43.140 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:43.140 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:43.140 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:43.140 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:43.140 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:43.140 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:43.140 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:43.140 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:43.140 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:43.140 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:43.140 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:43.140 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:43.140 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:43.140 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:43.140 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:43.140 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:43.140 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:43.140 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:43.140 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:43.140 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
00:08:43.140 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:43.140 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:43.140 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:43.140 Cannot find device "nvmf_init_br" 00:08:43.140 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:08:43.140 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:43.140 Cannot find device "nvmf_init_br2" 00:08:43.140 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:08:43.140 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:43.140 Cannot find device "nvmf_tgt_br" 00:08:43.140 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # true 00:08:43.140 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:43.140 Cannot find device "nvmf_tgt_br2" 00:08:43.140 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # true 00:08:43.140 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:43.140 Cannot find device "nvmf_init_br" 00:08:43.140 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # true 00:08:43.140 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:43.140 Cannot find device "nvmf_init_br2" 00:08:43.140 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # true 00:08:43.140 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:43.140 Cannot find device "nvmf_tgt_br" 00:08:43.140 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # true 00:08:43.140 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:43.140 Cannot find device "nvmf_tgt_br2" 00:08:43.141 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # true 00:08:43.141 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:43.141 Cannot find device "nvmf_br" 00:08:43.141 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # true 00:08:43.141 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:43.141 Cannot find device "nvmf_init_if" 00:08:43.141 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # true 00:08:43.141 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:43.141 Cannot find device "nvmf_init_if2" 00:08:43.141 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # true 00:08:43.141 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:43.141 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:43.141 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # true 00:08:43.141 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:43.141 Cannot open network namespace "nvmf_tgt_ns_spdk": No 
such file or directory 00:08:43.141 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # true 00:08:43.141 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:43.141 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:43.399 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:43.399 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:43.399 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:43.399 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:43.399 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:43.399 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:43.399 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:43.399 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:43.399 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:43.399 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:43.399 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:43.399 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:43.399 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:43.399 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:43.399 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:43.399 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:43.399 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:43.399 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:43.399 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:43.399 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:43.399 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:43.399 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:43.399 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:43.399 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:43.399 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 
1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:43.399 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:43.399 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:43.399 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:43.399 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:43.399 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:43.399 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:43.399 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:43.399 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.079 ms 00:08:43.399 00:08:43.399 --- 10.0.0.3 ping statistics --- 00:08:43.399 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:43.399 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:08:43.399 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:43.399 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:43.399 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.091 ms 00:08:43.399 00:08:43.399 --- 10.0.0.4 ping statistics --- 00:08:43.399 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:43.399 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:08:43.399 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:43.399 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:43.399 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:08:43.399 00:08:43.399 --- 10.0.0.1 ping statistics --- 00:08:43.399 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:43.399 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:08:43.399 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:43.399 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:43.399 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms 00:08:43.399 00:08:43.399 --- 10.0.0.2 ping statistics --- 00:08:43.399 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:43.399 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:08:43.399 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:43.399 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@461 -- # return 0 00:08:43.399 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:43.399 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:43.399 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:43.399 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:43.399 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:43.399 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:43.399 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:43.658 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:43.658 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:43.658 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:43.658 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:43.658 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=62692 00:08:43.658 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 62692 00:08:43.658 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:43.658 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 62692 ']' 00:08:43.658 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:43.658 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:43.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:43.658 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:43.658 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:43.658 09:46:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:43.658 [2024-12-06 09:46:08.731051] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:08:43.658 [2024-12-06 09:46:08.731502] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:43.658 [2024-12-06 09:46:08.882835] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:43.916 [2024-12-06 09:46:08.938135] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:43.916 [2024-12-06 09:46:08.938701] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:43.916 [2024-12-06 09:46:08.939057] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:43.916 [2024-12-06 09:46:08.939358] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:43.916 [2024-12-06 09:46:08.939651] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:43.917 [2024-12-06 09:46:08.940979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:43.917 [2024-12-06 09:46:08.941131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:43.917 [2024-12-06 09:46:08.941323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.917 [2024-12-06 09:46:08.999284] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:43.917 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:43.917 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:08:43.917 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:43.917 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:43.917 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:43.917 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:43.917 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:44.176 [2024-12-06 09:46:09.423718] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:44.434 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:44.693 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:44.693 09:46:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:44.953 09:46:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:44.953 09:46:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:45.211 09:46:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:45.470 09:46:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=6434a2d4-8ab9-45b5-87b4-6abe71f36d1f 00:08:45.470 09:46:10 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 6434a2d4-8ab9-45b5-87b4-6abe71f36d1f lvol 20 00:08:45.729 09:46:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=67a011dd-7966-455b-8105-5682d3027f6d 00:08:45.729 09:46:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:45.987 09:46:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 67a011dd-7966-455b-8105-5682d3027f6d 00:08:46.245 09:46:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:08:46.503 [2024-12-06 09:46:11.670157] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:46.503 09:46:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:46.761 09:46:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:46.761 09:46:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=62760 00:08:46.761 09:46:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:47.695 09:46:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 67a011dd-7966-455b-8105-5682d3027f6d MY_SNAPSHOT 00:08:48.263 09:46:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=94ec6035-6244-43db-8fc6-5c0ed4bce24e 00:08:48.263 09:46:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 67a011dd-7966-455b-8105-5682d3027f6d 30 00:08:48.521 09:46:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 94ec6035-6244-43db-8fc6-5c0ed4bce24e MY_CLONE 00:08:48.780 09:46:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=2fc3556c-5303-451d-9276-845c813cae5c 00:08:48.780 09:46:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 2fc3556c-5303-451d-9276-845c813cae5c 00:08:49.348 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 62760 00:08:57.536 Initializing NVMe Controllers 00:08:57.536 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:08:57.536 Controller IO queue size 128, less than required. 00:08:57.537 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:57.537 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:57.537 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:57.537 Initialization complete. Launching workers. 
00:08:57.537 ======================================================== 00:08:57.537 Latency(us) 00:08:57.537 Device Information : IOPS MiB/s Average min max 00:08:57.537 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 7036.99 27.49 18215.15 2162.09 99055.27 00:08:57.537 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 6579.39 25.70 19478.09 4815.47 105800.35 00:08:57.537 ======================================================== 00:08:57.537 Total : 13616.39 53.19 18825.40 2162.09 105800.35 00:08:57.537 00:08:57.537 09:46:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:57.537 09:46:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 67a011dd-7966-455b-8105-5682d3027f6d 00:08:57.796 09:46:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6434a2d4-8ab9-45b5-87b4-6abe71f36d1f 00:08:58.056 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:58.056 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:58.056 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:58.056 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:58.056 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:08:58.056 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:58.056 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:08:58.056 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:58.056 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:58.056 rmmod nvme_tcp 00:08:58.056 rmmod nvme_fabrics 00:08:58.056 rmmod nvme_keyring 00:08:58.056 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:58.056 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:08:58.056 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:08:58.056 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 62692 ']' 00:08:58.056 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 62692 00:08:58.056 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 62692 ']' 00:08:58.056 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 62692 00:08:58.056 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:08:58.056 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:58.056 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62692 00:08:58.315 killing process with pid 62692 00:08:58.315 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:58.315 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:58.315 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 62692' 00:08:58.315 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 62692 00:08:58.315 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 62692 00:08:58.575 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:58.575 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:58.575 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:58.575 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:08:58.575 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:08:58.575 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:58.575 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:08:58.575 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:58.575 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:58.575 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:58.575 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:58.575 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:58.575 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:58.575 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:58.575 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:58.575 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:58.575 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:58.575 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:58.575 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:58.575 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:58.575 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:58.575 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:58.575 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:58.575 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:58.575 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:58.575 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:58.835 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@300 -- # return 0 00:08:58.835 ************************************ 00:08:58.835 END TEST nvmf_lvol 00:08:58.835 ************************************ 00:08:58.835 00:08:58.835 real 0m15.815s 00:08:58.835 user 
1m5.619s 00:08:58.835 sys 0m3.706s 00:08:58.835 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:58.835 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:58.835 09:46:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:58.835 09:46:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:58.835 09:46:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:58.836 09:46:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:58.836 ************************************ 00:08:58.836 START TEST nvmf_lvs_grow 00:08:58.836 ************************************ 00:08:58.836 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:58.836 * Looking for test storage... 00:08:58.836 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:58.836 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:58.836 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:08:58.836 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:58.836 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:58.836 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:58.836 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:58.836 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:58.836 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:08:58.836 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:08:58.836 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:08:58.836 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:08:58.836 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:08:58.836 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:08:58.836 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:08:58.836 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:58.836 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:08:58.836 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:08:58.836 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:58.836 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:58.836 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:08:58.836 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:08:58.836 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:58.836 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:08:58.836 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:08:58.836 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:08:58.836 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:08:58.836 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:58.836 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:08:58.836 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:08:58.836 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:58.836 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:58.836 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:08:58.836 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:58.836 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:58.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.836 --rc genhtml_branch_coverage=1 00:08:58.836 --rc genhtml_function_coverage=1 00:08:58.836 --rc genhtml_legend=1 00:08:58.836 --rc geninfo_all_blocks=1 00:08:58.836 --rc geninfo_unexecuted_blocks=1 00:08:58.836 00:08:58.836 ' 00:08:58.836 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:58.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.836 --rc genhtml_branch_coverage=1 00:08:58.836 --rc genhtml_function_coverage=1 00:08:58.836 --rc genhtml_legend=1 00:08:58.836 --rc geninfo_all_blocks=1 00:08:58.836 --rc geninfo_unexecuted_blocks=1 00:08:58.836 00:08:58.836 ' 00:08:58.836 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:58.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.836 --rc genhtml_branch_coverage=1 00:08:58.836 --rc genhtml_function_coverage=1 00:08:58.836 --rc genhtml_legend=1 00:08:58.836 --rc geninfo_all_blocks=1 00:08:58.836 --rc geninfo_unexecuted_blocks=1 00:08:58.836 00:08:58.836 ' 00:08:58.836 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:58.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.836 --rc genhtml_branch_coverage=1 00:08:58.836 --rc genhtml_function_coverage=1 00:08:58.836 --rc genhtml_legend=1 00:08:58.836 --rc geninfo_all_blocks=1 00:08:58.836 --rc geninfo_unexecuted_blocks=1 00:08:58.836 00:08:58.836 ' 00:08:58.836 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:58.836 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:58.836 09:46:24 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:58.836 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:58.836 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:58.836 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:58.836 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:58.836 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:58.836 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:58.836 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:58.836 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:58.836 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:58.836 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:08:58.836 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:08:58.836 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:58.836 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:58.836 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:58.836 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:58.836 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:58.836 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:08:58.836 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:58.836 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:58.836 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:58.836 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.836 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.836 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.836 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:58.836 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.836 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:08:58.836 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:58.836 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:58.836 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:58.836 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:58.836 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:58.836 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:58.836 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:58.836 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:58.837 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:58.837 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:58.837 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:58.837 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
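Note on the RPC plumbing traced above: nvmf_lvs_grow.sh keeps two RPC endpoints, the default /var/tmp/spdk.sock for the nvmf target and a dedicated /var/tmp/bdevperf.sock for the bdevperf process. A minimal sketch, assuming only the paths and socket name shown in this trace (the helper layout is illustrative, not part of the script):

    # Sketch: selecting an SPDK RPC endpoint with rpc.py, as traced in this log.
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    bdevperf_rpc_sock=/var/tmp/bdevperf.sock

    # Without -s, rpc.py talks to the default socket (/var/tmp/spdk.sock), i.e. nvmf_tgt.
    "$rpc_py" nvmf_create_transport -t tcp -o -u 8192

    # With -s, the same script addresses the bdevperf application instead.
    "$rpc_py" -s "$bdevperf_rpc_sock" bdev_nvme_attach_controller \
        -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0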
00:08:58.837 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:58.837 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:58.837 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:58.837 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:58.837 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:58.837 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:58.837 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:58.837 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:58.837 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:59.096 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:59.096 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:59.096 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:59.096 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:59.096 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:59.096 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:59.096 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:59.096 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:59.096 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:59.096 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:59.096 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:59.096 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:59.096 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:59.096 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:59.096 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:59.096 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:59.096 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:59.096 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:59.096 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:59.096 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:59.096 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 
00:08:59.096 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:59.096 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:59.096 Cannot find device "nvmf_init_br" 00:08:59.096 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:08:59.096 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:59.096 Cannot find device "nvmf_init_br2" 00:08:59.096 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:08:59.096 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:59.096 Cannot find device "nvmf_tgt_br" 00:08:59.096 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 00:08:59.096 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:59.096 Cannot find device "nvmf_tgt_br2" 00:08:59.096 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 00:08:59.096 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:59.096 Cannot find device "nvmf_init_br" 00:08:59.096 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 00:08:59.096 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:59.096 Cannot find device "nvmf_init_br2" 00:08:59.096 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 00:08:59.096 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:59.096 Cannot find device "nvmf_tgt_br" 00:08:59.096 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # true 00:08:59.096 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:59.096 Cannot find device "nvmf_tgt_br2" 00:08:59.096 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 00:08:59.096 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:59.096 Cannot find device "nvmf_br" 00:08:59.096 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 00:08:59.097 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:59.097 Cannot find device "nvmf_init_if" 00:08:59.097 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 00:08:59.097 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:59.097 Cannot find device "nvmf_init_if2" 00:08:59.097 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 00:08:59.097 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:59.097 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:59.097 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 00:08:59.097 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:59.097 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:08:59.097 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 00:08:59.097 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:59.097 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:59.097 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:59.097 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:59.097 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:59.097 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:59.097 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:59.097 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:59.097 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:59.097 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:59.097 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:59.097 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:59.356 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:59.356 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:59.356 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:59.356 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:59.356 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:59.356 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:59.356 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:59.356 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:59.356 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:59.356 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:59.356 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:59.356 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:59.356 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:59.356 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
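At this point nvmf_veth_init has rebuilt the test topology. Condensed from the ip commands traced above (a sketch for orientation, not a replacement for nvmf/common.sh): two initiator-side veth pairs carry 10.0.0.1 and 10.0.0.2, two target-side pairs sit inside the nvmf_tgt_ns_spdk namespace with 10.0.0.3 and 10.0.0.4, and every bridge-side peer is enslaved to nvmf_br so initiator and target traffic meet on one L2 segment:

    # Sketch of the veth/namespace layout built by nvmf_veth_init (one pair per side shown).
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # target end lives in the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br                     # bridge-side peers joined to nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

The iptables ACCEPT rules and the four pings that follow in the trace verify this path before the target is started.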
00:08:59.356 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:59.356 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:59.356 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:59.356 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:59.356 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:59.356 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:59.356 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:59.356 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:59.356 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms 00:08:59.356 00:08:59.356 --- 10.0.0.3 ping statistics --- 00:08:59.356 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:59.356 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:08:59.356 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:59.356 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:59.356 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.063 ms 00:08:59.356 00:08:59.356 --- 10.0.0.4 ping statistics --- 00:08:59.356 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:59.356 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:08:59.356 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:59.356 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:59.356 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:08:59.356 00:08:59.356 --- 10.0.0.1 ping statistics --- 00:08:59.356 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:59.356 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:08:59.356 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:59.356 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:59.356 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.108 ms 00:08:59.356 00:08:59.356 --- 10.0.0.2 ping statistics --- 00:08:59.356 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:59.356 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:08:59.356 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:59.356 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@461 -- # return 0 00:08:59.356 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:59.356 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:59.356 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:59.356 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:59.356 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:59.356 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:59.356 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:59.356 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:08:59.356 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:59.356 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:59.356 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:59.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:59.356 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=63138 00:08:59.356 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:59.356 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 63138 00:08:59.356 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 63138 ']' 00:08:59.356 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:59.356 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:59.356 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:59.356 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:59.356 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:59.356 [2024-12-06 09:46:24.615406] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:08:59.356 [2024-12-06 09:46:24.615660] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:59.615 [2024-12-06 09:46:24.753809] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:59.615 [2024-12-06 09:46:24.811150] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:59.615 [2024-12-06 09:46:24.811374] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:59.615 [2024-12-06 09:46:24.811541] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:59.615 [2024-12-06 09:46:24.811720] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:59.615 [2024-12-06 09:46:24.811733] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:59.615 [2024-12-06 09:46:24.812173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.615 [2024-12-06 09:46:24.868856] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:00.550 09:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:00.550 09:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:09:00.550 09:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:00.550 09:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:00.550 09:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:00.550 09:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:00.550 09:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:00.809 [2024-12-06 09:46:25.945416] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:00.809 09:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:09:00.809 09:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:00.809 09:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:00.809 09:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:00.809 ************************************ 00:09:00.809 START TEST lvs_grow_clean 00:09:00.809 ************************************ 00:09:00.809 09:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:09:00.809 09:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:00.809 09:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:00.809 09:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:00.809 09:46:25 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:00.809 09:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:00.809 09:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:00.809 09:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:00.809 09:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:00.809 09:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:01.378 09:46:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:01.378 09:46:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:01.378 09:46:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=3a760c27-2614-4ae9-9eae-b0a8b3721d39 00:09:01.378 09:46:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:01.378 09:46:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3a760c27-2614-4ae9-9eae-b0a8b3721d39 00:09:01.637 09:46:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:01.637 09:46:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:01.637 09:46:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 3a760c27-2614-4ae9-9eae-b0a8b3721d39 lvol 150 00:09:02.206 09:46:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=40eb8d03-185b-4f3a-8b89-801eb5719c2e 00:09:02.206 09:46:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:02.206 09:46:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:02.206 [2024-12-06 09:46:27.378477] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:02.206 [2024-12-06 09:46:27.378591] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:02.206 true 00:09:02.206 09:46:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3a760c27-2614-4ae9-9eae-b0a8b3721d39 00:09:02.206 09:46:27 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:02.465 09:46:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:02.465 09:46:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:02.724 09:46:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 40eb8d03-185b-4f3a-8b89-801eb5719c2e 00:09:02.983 09:46:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:09:03.242 [2024-12-06 09:46:28.447124] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:03.242 09:46:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:03.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:03.501 09:46:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=63226 00:09:03.501 09:46:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:03.501 09:46:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:03.501 09:46:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 63226 /var/tmp/bdevperf.sock 00:09:03.501 09:46:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 63226 ']' 00:09:03.501 09:46:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:03.501 09:46:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:03.501 09:46:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:03.501 09:46:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:03.501 09:46:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:03.761 [2024-12-06 09:46:28.808078] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:09:03.761 [2024-12-06 09:46:28.808458] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63226 ] 00:09:03.761 [2024-12-06 09:46:28.955026] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:04.019 [2024-12-06 09:46:29.039668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:04.019 [2024-12-06 09:46:29.117036] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:04.586 09:46:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:04.586 09:46:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:09:04.586 09:46:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:05.169 Nvme0n1 00:09:05.169 09:46:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:05.169 [ 00:09:05.169 { 00:09:05.169 "name": "Nvme0n1", 00:09:05.169 "aliases": [ 00:09:05.169 "40eb8d03-185b-4f3a-8b89-801eb5719c2e" 00:09:05.169 ], 00:09:05.169 "product_name": "NVMe disk", 00:09:05.169 "block_size": 4096, 00:09:05.169 "num_blocks": 38912, 00:09:05.169 "uuid": "40eb8d03-185b-4f3a-8b89-801eb5719c2e", 00:09:05.169 "numa_id": -1, 00:09:05.169 "assigned_rate_limits": { 00:09:05.169 "rw_ios_per_sec": 0, 00:09:05.169 "rw_mbytes_per_sec": 0, 00:09:05.169 "r_mbytes_per_sec": 0, 00:09:05.169 "w_mbytes_per_sec": 0 00:09:05.169 }, 00:09:05.169 "claimed": false, 00:09:05.169 "zoned": false, 00:09:05.169 "supported_io_types": { 00:09:05.169 "read": true, 00:09:05.170 "write": true, 00:09:05.170 "unmap": true, 00:09:05.170 "flush": true, 00:09:05.170 "reset": true, 00:09:05.170 "nvme_admin": true, 00:09:05.170 "nvme_io": true, 00:09:05.170 "nvme_io_md": false, 00:09:05.170 "write_zeroes": true, 00:09:05.170 "zcopy": false, 00:09:05.170 "get_zone_info": false, 00:09:05.170 "zone_management": false, 00:09:05.170 "zone_append": false, 00:09:05.170 "compare": true, 00:09:05.170 "compare_and_write": true, 00:09:05.170 "abort": true, 00:09:05.170 "seek_hole": false, 00:09:05.170 "seek_data": false, 00:09:05.170 "copy": true, 00:09:05.170 "nvme_iov_md": false 00:09:05.170 }, 00:09:05.170 "memory_domains": [ 00:09:05.170 { 00:09:05.170 "dma_device_id": "system", 00:09:05.170 "dma_device_type": 1 00:09:05.170 } 00:09:05.170 ], 00:09:05.170 "driver_specific": { 00:09:05.170 "nvme": [ 00:09:05.170 { 00:09:05.170 "trid": { 00:09:05.170 "trtype": "TCP", 00:09:05.170 "adrfam": "IPv4", 00:09:05.170 "traddr": "10.0.0.3", 00:09:05.170 "trsvcid": "4420", 00:09:05.170 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:05.170 }, 00:09:05.170 "ctrlr_data": { 00:09:05.170 "cntlid": 1, 00:09:05.170 "vendor_id": "0x8086", 00:09:05.170 "model_number": "SPDK bdev Controller", 00:09:05.170 "serial_number": "SPDK0", 00:09:05.170 "firmware_revision": "25.01", 00:09:05.170 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:05.170 "oacs": { 00:09:05.170 "security": 0, 00:09:05.170 "format": 0, 00:09:05.170 "firmware": 0, 
00:09:05.170 "ns_manage": 0 00:09:05.170 }, 00:09:05.170 "multi_ctrlr": true, 00:09:05.170 "ana_reporting": false 00:09:05.170 }, 00:09:05.170 "vs": { 00:09:05.170 "nvme_version": "1.3" 00:09:05.170 }, 00:09:05.170 "ns_data": { 00:09:05.170 "id": 1, 00:09:05.170 "can_share": true 00:09:05.170 } 00:09:05.170 } 00:09:05.170 ], 00:09:05.170 "mp_policy": "active_passive" 00:09:05.170 } 00:09:05.170 } 00:09:05.170 ] 00:09:05.170 09:46:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=63255 00:09:05.170 09:46:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:05.170 09:46:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:05.429 Running I/O for 10 seconds... 00:09:06.366 Latency(us) 00:09:06.366 [2024-12-06T09:46:31.638Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:06.366 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:06.366 Nvme0n1 : 1.00 6264.00 24.47 0.00 0.00 0.00 0.00 0.00 00:09:06.366 [2024-12-06T09:46:31.638Z] =================================================================================================================== 00:09:06.366 [2024-12-06T09:46:31.638Z] Total : 6264.00 24.47 0.00 0.00 0.00 0.00 0.00 00:09:06.366 00:09:07.302 09:46:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 3a760c27-2614-4ae9-9eae-b0a8b3721d39 00:09:07.302 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:07.302 Nvme0n1 : 2.00 6180.00 24.14 0.00 0.00 0.00 0.00 0.00 00:09:07.302 [2024-12-06T09:46:32.574Z] =================================================================================================================== 00:09:07.302 [2024-12-06T09:46:32.574Z] Total : 6180.00 24.14 0.00 0.00 0.00 0.00 0.00 00:09:07.302 00:09:07.561 true 00:09:07.561 09:46:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:07.561 09:46:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3a760c27-2614-4ae9-9eae-b0a8b3721d39 00:09:07.820 09:46:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:07.820 09:46:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:07.820 09:46:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 63255 00:09:08.401 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:08.401 Nvme0n1 : 3.00 6157.00 24.05 0.00 0.00 0.00 0.00 0.00 00:09:08.401 [2024-12-06T09:46:33.673Z] =================================================================================================================== 00:09:08.401 [2024-12-06T09:46:33.673Z] Total : 6157.00 24.05 0.00 0.00 0.00 0.00 0.00 00:09:08.401 00:09:09.339 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:09.339 Nvme0n1 : 4.00 6173.50 24.12 0.00 0.00 0.00 0.00 0.00 00:09:09.339 [2024-12-06T09:46:34.611Z] 
=================================================================================================================== 00:09:09.339 [2024-12-06T09:46:34.611Z] Total : 6173.50 24.12 0.00 0.00 0.00 0.00 0.00 00:09:09.339 00:09:10.718 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:10.718 Nvme0n1 : 5.00 6208.80 24.25 0.00 0.00 0.00 0.00 0.00 00:09:10.718 [2024-12-06T09:46:35.990Z] =================================================================================================================== 00:09:10.718 [2024-12-06T09:46:35.990Z] Total : 6208.80 24.25 0.00 0.00 0.00 0.00 0.00 00:09:10.718 00:09:11.285 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:11.285 Nvme0n1 : 6.00 6232.33 24.35 0.00 0.00 0.00 0.00 0.00 00:09:11.285 [2024-12-06T09:46:36.557Z] =================================================================================================================== 00:09:11.285 [2024-12-06T09:46:36.558Z] Total : 6232.33 24.35 0.00 0.00 0.00 0.00 0.00 00:09:11.286 00:09:12.665 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:12.665 Nvme0n1 : 7.00 6212.86 24.27 0.00 0.00 0.00 0.00 0.00 00:09:12.665 [2024-12-06T09:46:37.937Z] =================================================================================================================== 00:09:12.665 [2024-12-06T09:46:37.937Z] Total : 6212.86 24.27 0.00 0.00 0.00 0.00 0.00 00:09:12.665 00:09:13.603 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:13.603 Nvme0n1 : 8.00 6214.12 24.27 0.00 0.00 0.00 0.00 0.00 00:09:13.603 [2024-12-06T09:46:38.875Z] =================================================================================================================== 00:09:13.603 [2024-12-06T09:46:38.875Z] Total : 6214.12 24.27 0.00 0.00 0.00 0.00 0.00 00:09:13.603 00:09:14.539 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:14.539 Nvme0n1 : 9.00 6215.11 24.28 0.00 0.00 0.00 0.00 0.00 00:09:14.539 [2024-12-06T09:46:39.811Z] =================================================================================================================== 00:09:14.539 [2024-12-06T09:46:39.811Z] Total : 6215.11 24.28 0.00 0.00 0.00 0.00 0.00 00:09:14.539 00:09:15.477 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:15.477 Nvme0n1 : 10.00 6203.20 24.23 0.00 0.00 0.00 0.00 0.00 00:09:15.477 [2024-12-06T09:46:40.749Z] =================================================================================================================== 00:09:15.477 [2024-12-06T09:46:40.749Z] Total : 6203.20 24.23 0.00 0.00 0.00 0.00 0.00 00:09:15.477 00:09:15.477 00:09:15.477 Latency(us) 00:09:15.477 [2024-12-06T09:46:40.749Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:15.477 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:15.477 Nvme0n1 : 10.00 6213.73 24.27 0.00 0.00 20593.20 7238.75 70063.94 00:09:15.477 [2024-12-06T09:46:40.749Z] =================================================================================================================== 00:09:15.477 [2024-12-06T09:46:40.749Z] Total : 6213.73 24.27 0.00 0.00 20593.20 7238.75 70063.94 00:09:15.477 { 00:09:15.477 "results": [ 00:09:15.477 { 00:09:15.477 "job": "Nvme0n1", 00:09:15.477 "core_mask": "0x2", 00:09:15.477 "workload": "randwrite", 00:09:15.477 "status": "finished", 00:09:15.477 "queue_depth": 128, 00:09:15.477 "io_size": 4096, 00:09:15.477 "runtime": 
10.003649, 00:09:15.477 "iops": 6213.732608970987, 00:09:15.477 "mibps": 24.272393003792917, 00:09:15.477 "io_failed": 0, 00:09:15.477 "io_timeout": 0, 00:09:15.477 "avg_latency_us": 20593.202618930616, 00:09:15.477 "min_latency_us": 7238.749090909091, 00:09:15.477 "max_latency_us": 70063.94181818182 00:09:15.477 } 00:09:15.477 ], 00:09:15.477 "core_count": 1 00:09:15.477 } 00:09:15.477 09:46:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 63226 00:09:15.477 09:46:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 63226 ']' 00:09:15.477 09:46:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 63226 00:09:15.477 09:46:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:09:15.477 09:46:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:15.477 09:46:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63226 00:09:15.477 killing process with pid 63226 00:09:15.477 Received shutdown signal, test time was about 10.000000 seconds 00:09:15.477 00:09:15.477 Latency(us) 00:09:15.477 [2024-12-06T09:46:40.749Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:15.477 [2024-12-06T09:46:40.749Z] =================================================================================================================== 00:09:15.477 [2024-12-06T09:46:40.749Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:15.477 09:46:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:15.477 09:46:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:15.477 09:46:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63226' 00:09:15.477 09:46:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 63226 00:09:15.477 09:46:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 63226 00:09:15.737 09:46:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:16.016 09:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:16.299 09:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3a760c27-2614-4ae9-9eae-b0a8b3721d39 00:09:16.299 09:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:16.558 09:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:16.558 09:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:09:16.558 09:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:16.817 [2024-12-06 09:46:41.970152] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:16.817 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3a760c27-2614-4ae9-9eae-b0a8b3721d39 00:09:16.817 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:09:16.817 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3a760c27-2614-4ae9-9eae-b0a8b3721d39 00:09:16.817 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:16.817 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:16.817 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:16.817 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:16.817 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:16.817 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:16.817 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:16.817 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:16.817 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3a760c27-2614-4ae9-9eae-b0a8b3721d39 00:09:17.077 request: 00:09:17.077 { 00:09:17.077 "uuid": "3a760c27-2614-4ae9-9eae-b0a8b3721d39", 00:09:17.077 "method": "bdev_lvol_get_lvstores", 00:09:17.077 "req_id": 1 00:09:17.077 } 00:09:17.077 Got JSON-RPC error response 00:09:17.077 response: 00:09:17.077 { 00:09:17.077 "code": -19, 00:09:17.077 "message": "No such device" 00:09:17.077 } 00:09:17.077 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:09:17.077 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:17.077 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:17.077 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:17.077 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:17.336 aio_bdev 00:09:17.336 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
40eb8d03-185b-4f3a-8b89-801eb5719c2e 00:09:17.336 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=40eb8d03-185b-4f3a-8b89-801eb5719c2e 00:09:17.336 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:17.336 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:09:17.336 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:17.336 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:17.336 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:17.596 09:46:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 40eb8d03-185b-4f3a-8b89-801eb5719c2e -t 2000 00:09:17.855 [ 00:09:17.855 { 00:09:17.855 "name": "40eb8d03-185b-4f3a-8b89-801eb5719c2e", 00:09:17.855 "aliases": [ 00:09:17.855 "lvs/lvol" 00:09:17.855 ], 00:09:17.855 "product_name": "Logical Volume", 00:09:17.855 "block_size": 4096, 00:09:17.855 "num_blocks": 38912, 00:09:17.855 "uuid": "40eb8d03-185b-4f3a-8b89-801eb5719c2e", 00:09:17.855 "assigned_rate_limits": { 00:09:17.855 "rw_ios_per_sec": 0, 00:09:17.855 "rw_mbytes_per_sec": 0, 00:09:17.855 "r_mbytes_per_sec": 0, 00:09:17.855 "w_mbytes_per_sec": 0 00:09:17.855 }, 00:09:17.855 "claimed": false, 00:09:17.855 "zoned": false, 00:09:17.855 "supported_io_types": { 00:09:17.855 "read": true, 00:09:17.855 "write": true, 00:09:17.855 "unmap": true, 00:09:17.855 "flush": false, 00:09:17.855 "reset": true, 00:09:17.855 "nvme_admin": false, 00:09:17.855 "nvme_io": false, 00:09:17.855 "nvme_io_md": false, 00:09:17.855 "write_zeroes": true, 00:09:17.855 "zcopy": false, 00:09:17.855 "get_zone_info": false, 00:09:17.855 "zone_management": false, 00:09:17.855 "zone_append": false, 00:09:17.855 "compare": false, 00:09:17.855 "compare_and_write": false, 00:09:17.855 "abort": false, 00:09:17.855 "seek_hole": true, 00:09:17.855 "seek_data": true, 00:09:17.855 "copy": false, 00:09:17.855 "nvme_iov_md": false 00:09:17.855 }, 00:09:17.855 "driver_specific": { 00:09:17.855 "lvol": { 00:09:17.855 "lvol_store_uuid": "3a760c27-2614-4ae9-9eae-b0a8b3721d39", 00:09:17.855 "base_bdev": "aio_bdev", 00:09:17.855 "thin_provision": false, 00:09:17.855 "num_allocated_clusters": 38, 00:09:17.855 "snapshot": false, 00:09:17.855 "clone": false, 00:09:17.855 "esnap_clone": false 00:09:17.855 } 00:09:17.855 } 00:09:17.855 } 00:09:17.855 ] 00:09:17.855 09:46:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:09:17.855 09:46:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3a760c27-2614-4ae9-9eae-b0a8b3721d39 00:09:17.855 09:46:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:18.115 09:46:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:18.115 09:46:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3a760c27-2614-4ae9-9eae-b0a8b3721d39 00:09:18.115 09:46:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:18.375 09:46:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:18.375 09:46:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 40eb8d03-185b-4f3a-8b89-801eb5719c2e 00:09:18.946 09:46:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3a760c27-2614-4ae9-9eae-b0a8b3721d39 00:09:19.205 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:19.464 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:19.724 ************************************ 00:09:19.724 END TEST lvs_grow_clean 00:09:19.724 ************************************ 00:09:19.724 00:09:19.724 real 0m19.001s 00:09:19.724 user 0m17.778s 00:09:19.724 sys 0m2.817s 00:09:19.724 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:19.724 09:46:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:19.983 09:46:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:09:19.983 09:46:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:19.983 09:46:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:19.983 09:46:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:19.983 ************************************ 00:09:19.983 START TEST lvs_grow_dirty 00:09:19.983 ************************************ 00:09:19.983 09:46:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:09:19.983 09:46:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:19.983 09:46:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:19.983 09:46:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:19.983 09:46:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:19.983 09:46:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:19.983 09:46:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:19.983 09:46:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:19.983 09:46:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:19.983 09:46:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:20.241 09:46:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:20.241 09:46:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:20.500 09:46:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=b64c90e3-c73a-4fbb-87dd-c53e78147dcb 00:09:20.500 09:46:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b64c90e3-c73a-4fbb-87dd-c53e78147dcb 00:09:20.500 09:46:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:20.759 09:46:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:20.759 09:46:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:20.759 09:46:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u b64c90e3-c73a-4fbb-87dd-c53e78147dcb lvol 150 00:09:21.327 09:46:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=3f90170e-5aca-4f01-83b7-989641a0a65f 00:09:21.327 09:46:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:21.327 09:46:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:21.327 [2024-12-06 09:46:46.515411] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:21.327 [2024-12-06 09:46:46.515504] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:21.327 true 00:09:21.327 09:46:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b64c90e3-c73a-4fbb-87dd-c53e78147dcb 00:09:21.327 09:46:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:21.895 09:46:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:21.896 09:46:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:21.896 09:46:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 3f90170e-5aca-4f01-83b7-989641a0a65f 00:09:22.155 09:46:47 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:09:22.414 [2024-12-06 09:46:47.627938] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:22.414 09:46:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:22.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:22.674 09:46:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=63509 00:09:22.674 09:46:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:22.674 09:46:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:22.674 09:46:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 63509 /var/tmp/bdevperf.sock 00:09:22.674 09:46:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 63509 ']' 00:09:22.674 09:46:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:22.674 09:46:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:22.674 09:46:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:22.674 09:46:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:22.674 09:46:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:22.934 [2024-12-06 09:46:47.972176] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:09:22.934 [2024-12-06 09:46:47.972984] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63509 ] 00:09:22.934 [2024-12-06 09:46:48.121236] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:23.193 [2024-12-06 09:46:48.206223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:23.193 [2024-12-06 09:46:48.277382] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:23.759 09:46:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:23.759 09:46:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:09:23.759 09:46:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:24.017 Nvme0n1 00:09:24.017 09:46:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:24.276 [ 00:09:24.276 { 00:09:24.276 "name": "Nvme0n1", 00:09:24.276 "aliases": [ 00:09:24.276 "3f90170e-5aca-4f01-83b7-989641a0a65f" 00:09:24.276 ], 00:09:24.276 "product_name": "NVMe disk", 00:09:24.276 "block_size": 4096, 00:09:24.276 "num_blocks": 38912, 00:09:24.276 "uuid": "3f90170e-5aca-4f01-83b7-989641a0a65f", 00:09:24.276 "numa_id": -1, 00:09:24.276 "assigned_rate_limits": { 00:09:24.276 "rw_ios_per_sec": 0, 00:09:24.276 "rw_mbytes_per_sec": 0, 00:09:24.276 "r_mbytes_per_sec": 0, 00:09:24.276 "w_mbytes_per_sec": 0 00:09:24.276 }, 00:09:24.276 "claimed": false, 00:09:24.276 "zoned": false, 00:09:24.276 "supported_io_types": { 00:09:24.276 "read": true, 00:09:24.276 "write": true, 00:09:24.276 "unmap": true, 00:09:24.276 "flush": true, 00:09:24.276 "reset": true, 00:09:24.276 "nvme_admin": true, 00:09:24.276 "nvme_io": true, 00:09:24.276 "nvme_io_md": false, 00:09:24.276 "write_zeroes": true, 00:09:24.276 "zcopy": false, 00:09:24.276 "get_zone_info": false, 00:09:24.276 "zone_management": false, 00:09:24.276 "zone_append": false, 00:09:24.276 "compare": true, 00:09:24.276 "compare_and_write": true, 00:09:24.276 "abort": true, 00:09:24.276 "seek_hole": false, 00:09:24.276 "seek_data": false, 00:09:24.276 "copy": true, 00:09:24.276 "nvme_iov_md": false 00:09:24.276 }, 00:09:24.276 "memory_domains": [ 00:09:24.276 { 00:09:24.276 "dma_device_id": "system", 00:09:24.276 "dma_device_type": 1 00:09:24.276 } 00:09:24.276 ], 00:09:24.276 "driver_specific": { 00:09:24.276 "nvme": [ 00:09:24.276 { 00:09:24.276 "trid": { 00:09:24.276 "trtype": "TCP", 00:09:24.276 "adrfam": "IPv4", 00:09:24.276 "traddr": "10.0.0.3", 00:09:24.276 "trsvcid": "4420", 00:09:24.276 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:24.276 }, 00:09:24.276 "ctrlr_data": { 00:09:24.276 "cntlid": 1, 00:09:24.276 "vendor_id": "0x8086", 00:09:24.276 "model_number": "SPDK bdev Controller", 00:09:24.276 "serial_number": "SPDK0", 00:09:24.276 "firmware_revision": "25.01", 00:09:24.276 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:24.276 "oacs": { 00:09:24.276 "security": 0, 00:09:24.276 "format": 0, 00:09:24.276 "firmware": 0, 
00:09:24.276 "ns_manage": 0 00:09:24.276 }, 00:09:24.276 "multi_ctrlr": true, 00:09:24.276 "ana_reporting": false 00:09:24.276 }, 00:09:24.276 "vs": { 00:09:24.276 "nvme_version": "1.3" 00:09:24.276 }, 00:09:24.276 "ns_data": { 00:09:24.276 "id": 1, 00:09:24.276 "can_share": true 00:09:24.276 } 00:09:24.276 } 00:09:24.276 ], 00:09:24.276 "mp_policy": "active_passive" 00:09:24.276 } 00:09:24.276 } 00:09:24.276 ] 00:09:24.276 09:46:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=63538 00:09:24.276 09:46:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:24.276 09:46:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:24.536 Running I/O for 10 seconds... 00:09:25.484 Latency(us) 00:09:25.484 [2024-12-06T09:46:50.756Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:25.484 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:25.484 Nvme0n1 : 1.00 6477.00 25.30 0.00 0.00 0.00 0.00 0.00 00:09:25.484 [2024-12-06T09:46:50.756Z] =================================================================================================================== 00:09:25.484 [2024-12-06T09:46:50.756Z] Total : 6477.00 25.30 0.00 0.00 0.00 0.00 0.00 00:09:25.484 00:09:26.419 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u b64c90e3-c73a-4fbb-87dd-c53e78147dcb 00:09:26.419 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:26.419 Nvme0n1 : 2.00 6477.00 25.30 0.00 0.00 0.00 0.00 0.00 00:09:26.419 [2024-12-06T09:46:51.691Z] =================================================================================================================== 00:09:26.419 [2024-12-06T09:46:51.691Z] Total : 6477.00 25.30 0.00 0.00 0.00 0.00 0.00 00:09:26.419 00:09:26.678 true 00:09:26.678 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b64c90e3-c73a-4fbb-87dd-c53e78147dcb 00:09:26.678 09:46:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:26.936 09:46:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:26.936 09:46:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:26.936 09:46:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 63538 00:09:27.503 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:27.504 Nvme0n1 : 3.00 6561.67 25.63 0.00 0.00 0.00 0.00 0.00 00:09:27.504 [2024-12-06T09:46:52.776Z] =================================================================================================================== 00:09:27.504 [2024-12-06T09:46:52.776Z] Total : 6561.67 25.63 0.00 0.00 0.00 0.00 0.00 00:09:27.504 00:09:28.463 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:28.463 Nvme0n1 : 4.00 6572.25 25.67 0.00 0.00 0.00 0.00 0.00 00:09:28.463 [2024-12-06T09:46:53.735Z] 
=================================================================================================================== 00:09:28.463 [2024-12-06T09:46:53.735Z] Total : 6572.25 25.67 0.00 0.00 0.00 0.00 0.00 00:09:28.463 00:09:29.406 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:29.406 Nvme0n1 : 5.00 6553.20 25.60 0.00 0.00 0.00 0.00 0.00 00:09:29.406 [2024-12-06T09:46:54.678Z] =================================================================================================================== 00:09:29.406 [2024-12-06T09:46:54.678Z] Total : 6553.20 25.60 0.00 0.00 0.00 0.00 0.00 00:09:29.406 00:09:30.784 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:30.784 Nvme0n1 : 6.00 6539.83 25.55 0.00 0.00 0.00 0.00 0.00 00:09:30.784 [2024-12-06T09:46:56.056Z] =================================================================================================================== 00:09:30.784 [2024-12-06T09:46:56.056Z] Total : 6539.83 25.55 0.00 0.00 0.00 0.00 0.00 00:09:30.784 00:09:31.720 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:31.720 Nvme0n1 : 7.00 6530.86 25.51 0.00 0.00 0.00 0.00 0.00 00:09:31.720 [2024-12-06T09:46:56.992Z] =================================================================================================================== 00:09:31.720 [2024-12-06T09:46:56.992Z] Total : 6530.86 25.51 0.00 0.00 0.00 0.00 0.00 00:09:31.720 00:09:32.657 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:32.657 Nvme0n1 : 8.00 6524.12 25.48 0.00 0.00 0.00 0.00 0.00 00:09:32.657 [2024-12-06T09:46:57.929Z] =================================================================================================================== 00:09:32.657 [2024-12-06T09:46:57.929Z] Total : 6524.12 25.48 0.00 0.00 0.00 0.00 0.00 00:09:32.657 00:09:33.595 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:33.595 Nvme0n1 : 9.00 6533.00 25.52 0.00 0.00 0.00 0.00 0.00 00:09:33.595 [2024-12-06T09:46:58.867Z] =================================================================================================================== 00:09:33.595 [2024-12-06T09:46:58.867Z] Total : 6533.00 25.52 0.00 0.00 0.00 0.00 0.00 00:09:33.595 00:09:34.532 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:34.532 Nvme0n1 : 10.00 6527.40 25.50 0.00 0.00 0.00 0.00 0.00 00:09:34.532 [2024-12-06T09:46:59.804Z] =================================================================================================================== 00:09:34.532 [2024-12-06T09:46:59.804Z] Total : 6527.40 25.50 0.00 0.00 0.00 0.00 0.00 00:09:34.532 00:09:34.532 00:09:34.532 Latency(us) 00:09:34.532 [2024-12-06T09:46:59.804Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:34.532 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:34.532 Nvme0n1 : 10.01 6536.20 25.53 0.00 0.00 19577.80 6017.40 50045.67 00:09:34.532 [2024-12-06T09:46:59.804Z] =================================================================================================================== 00:09:34.532 [2024-12-06T09:46:59.804Z] Total : 6536.20 25.53 0.00 0.00 19577.80 6017.40 50045.67 00:09:34.532 { 00:09:34.532 "results": [ 00:09:34.532 { 00:09:34.532 "job": "Nvme0n1", 00:09:34.532 "core_mask": "0x2", 00:09:34.532 "workload": "randwrite", 00:09:34.532 "status": "finished", 00:09:34.532 "queue_depth": 128, 00:09:34.532 "io_size": 4096, 00:09:34.532 "runtime": 
10.006115, 00:09:34.532 "iops": 6536.203111797136, 00:09:34.532 "mibps": 25.532043405457564, 00:09:34.532 "io_failed": 0, 00:09:34.532 "io_timeout": 0, 00:09:34.532 "avg_latency_us": 19577.7959653166, 00:09:34.532 "min_latency_us": 6017.396363636363, 00:09:34.532 "max_latency_us": 50045.67272727273 00:09:34.532 } 00:09:34.532 ], 00:09:34.532 "core_count": 1 00:09:34.532 } 00:09:34.532 09:46:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 63509 00:09:34.532 09:46:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 63509 ']' 00:09:34.532 09:46:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 63509 00:09:34.532 09:46:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:09:34.532 09:46:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:34.532 09:46:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63509 00:09:34.532 killing process with pid 63509 00:09:34.532 Received shutdown signal, test time was about 10.000000 seconds 00:09:34.532 00:09:34.532 Latency(us) 00:09:34.532 [2024-12-06T09:46:59.804Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:34.532 [2024-12-06T09:46:59.804Z] =================================================================================================================== 00:09:34.532 [2024-12-06T09:46:59.804Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:34.532 09:46:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:34.532 09:46:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:34.532 09:46:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63509' 00:09:34.532 09:46:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 63509 00:09:34.532 09:46:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 63509 00:09:34.791 09:46:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:35.050 09:47:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:35.310 09:47:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b64c90e3-c73a-4fbb-87dd-c53e78147dcb 00:09:35.310 09:47:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:35.569 09:47:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:35.569 09:47:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:35.569 09:47:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 63138 00:09:35.569 
09:47:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 63138 00:09:35.828 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 63138 Killed "${NVMF_APP[@]}" "$@" 00:09:35.828 09:47:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:35.828 09:47:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:35.828 09:47:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:35.828 09:47:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:35.828 09:47:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:35.828 09:47:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=63671 00:09:35.828 09:47:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:35.828 09:47:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 63671 00:09:35.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:35.828 09:47:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 63671 ']' 00:09:35.828 09:47:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:35.828 09:47:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:35.828 09:47:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:35.828 09:47:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:35.828 09:47:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:35.829 [2024-12-06 09:47:00.919341] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:09:35.829 [2024-12-06 09:47:00.919767] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:35.829 [2024-12-06 09:47:01.063241] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:36.087 [2024-12-06 09:47:01.111925] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:36.088 [2024-12-06 09:47:01.111982] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:36.088 [2024-12-06 09:47:01.112010] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:36.088 [2024-12-06 09:47:01.112018] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:36.088 [2024-12-06 09:47:01.112024] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:36.088 [2024-12-06 09:47:01.112378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:36.088 [2024-12-06 09:47:01.165765] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:36.088 09:47:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:36.088 09:47:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:09:36.088 09:47:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:36.088 09:47:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:36.088 09:47:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:36.088 09:47:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:36.088 09:47:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:36.348 [2024-12-06 09:47:01.525881] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:36.348 [2024-12-06 09:47:01.526308] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:36.348 [2024-12-06 09:47:01.526613] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:36.348 09:47:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:09:36.348 09:47:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 3f90170e-5aca-4f01-83b7-989641a0a65f 00:09:36.348 09:47:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=3f90170e-5aca-4f01-83b7-989641a0a65f 00:09:36.348 09:47:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:36.348 09:47:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:09:36.348 09:47:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:36.348 09:47:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:36.348 09:47:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:36.607 09:47:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3f90170e-5aca-4f01-83b7-989641a0a65f -t 2000 00:09:36.867 [ 00:09:36.867 { 00:09:36.867 "name": "3f90170e-5aca-4f01-83b7-989641a0a65f", 00:09:36.867 "aliases": [ 00:09:36.867 "lvs/lvol" 00:09:36.867 ], 00:09:36.867 "product_name": "Logical Volume", 00:09:36.867 "block_size": 4096, 00:09:36.867 "num_blocks": 38912, 00:09:36.867 "uuid": "3f90170e-5aca-4f01-83b7-989641a0a65f", 00:09:36.867 "assigned_rate_limits": { 00:09:36.867 "rw_ios_per_sec": 0, 00:09:36.867 "rw_mbytes_per_sec": 0, 00:09:36.867 "r_mbytes_per_sec": 0, 00:09:36.867 "w_mbytes_per_sec": 0 00:09:36.867 }, 00:09:36.867 
"claimed": false, 00:09:36.867 "zoned": false, 00:09:36.867 "supported_io_types": { 00:09:36.867 "read": true, 00:09:36.867 "write": true, 00:09:36.867 "unmap": true, 00:09:36.867 "flush": false, 00:09:36.867 "reset": true, 00:09:36.867 "nvme_admin": false, 00:09:36.867 "nvme_io": false, 00:09:36.867 "nvme_io_md": false, 00:09:36.867 "write_zeroes": true, 00:09:36.867 "zcopy": false, 00:09:36.867 "get_zone_info": false, 00:09:36.867 "zone_management": false, 00:09:36.867 "zone_append": false, 00:09:36.867 "compare": false, 00:09:36.867 "compare_and_write": false, 00:09:36.867 "abort": false, 00:09:36.867 "seek_hole": true, 00:09:36.867 "seek_data": true, 00:09:36.867 "copy": false, 00:09:36.867 "nvme_iov_md": false 00:09:36.867 }, 00:09:36.867 "driver_specific": { 00:09:36.867 "lvol": { 00:09:36.867 "lvol_store_uuid": "b64c90e3-c73a-4fbb-87dd-c53e78147dcb", 00:09:36.867 "base_bdev": "aio_bdev", 00:09:36.867 "thin_provision": false, 00:09:36.867 "num_allocated_clusters": 38, 00:09:36.867 "snapshot": false, 00:09:36.867 "clone": false, 00:09:36.867 "esnap_clone": false 00:09:36.867 } 00:09:36.867 } 00:09:36.867 } 00:09:36.867 ] 00:09:36.867 09:47:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:09:36.867 09:47:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b64c90e3-c73a-4fbb-87dd-c53e78147dcb 00:09:36.867 09:47:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:37.126 09:47:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:37.126 09:47:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b64c90e3-c73a-4fbb-87dd-c53e78147dcb 00:09:37.126 09:47:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:37.386 09:47:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:37.386 09:47:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:37.646 [2024-12-06 09:47:02.891670] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:37.905 09:47:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b64c90e3-c73a-4fbb-87dd-c53e78147dcb 00:09:37.905 09:47:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:09:37.905 09:47:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b64c90e3-c73a-4fbb-87dd-c53e78147dcb 00:09:37.905 09:47:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:37.905 09:47:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:37.905 09:47:02 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:37.905 09:47:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:37.905 09:47:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:37.905 09:47:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:37.905 09:47:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:37.905 09:47:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:37.905 09:47:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b64c90e3-c73a-4fbb-87dd-c53e78147dcb 00:09:37.905 request: 00:09:37.905 { 00:09:37.906 "uuid": "b64c90e3-c73a-4fbb-87dd-c53e78147dcb", 00:09:37.906 "method": "bdev_lvol_get_lvstores", 00:09:37.906 "req_id": 1 00:09:37.906 } 00:09:37.906 Got JSON-RPC error response 00:09:37.906 response: 00:09:37.906 { 00:09:37.906 "code": -19, 00:09:37.906 "message": "No such device" 00:09:37.906 } 00:09:38.165 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:09:38.165 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:38.165 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:38.165 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:38.165 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:38.424 aio_bdev 00:09:38.424 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 3f90170e-5aca-4f01-83b7-989641a0a65f 00:09:38.424 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=3f90170e-5aca-4f01-83b7-989641a0a65f 00:09:38.424 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:38.424 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:09:38.424 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:38.424 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:38.424 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:38.683 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3f90170e-5aca-4f01-83b7-989641a0a65f -t 2000 00:09:38.943 [ 00:09:38.943 { 
00:09:38.943 "name": "3f90170e-5aca-4f01-83b7-989641a0a65f", 00:09:38.943 "aliases": [ 00:09:38.943 "lvs/lvol" 00:09:38.943 ], 00:09:38.943 "product_name": "Logical Volume", 00:09:38.943 "block_size": 4096, 00:09:38.943 "num_blocks": 38912, 00:09:38.943 "uuid": "3f90170e-5aca-4f01-83b7-989641a0a65f", 00:09:38.943 "assigned_rate_limits": { 00:09:38.943 "rw_ios_per_sec": 0, 00:09:38.943 "rw_mbytes_per_sec": 0, 00:09:38.943 "r_mbytes_per_sec": 0, 00:09:38.943 "w_mbytes_per_sec": 0 00:09:38.943 }, 00:09:38.943 "claimed": false, 00:09:38.943 "zoned": false, 00:09:38.943 "supported_io_types": { 00:09:38.943 "read": true, 00:09:38.943 "write": true, 00:09:38.943 "unmap": true, 00:09:38.943 "flush": false, 00:09:38.943 "reset": true, 00:09:38.943 "nvme_admin": false, 00:09:38.943 "nvme_io": false, 00:09:38.943 "nvme_io_md": false, 00:09:38.943 "write_zeroes": true, 00:09:38.943 "zcopy": false, 00:09:38.943 "get_zone_info": false, 00:09:38.943 "zone_management": false, 00:09:38.943 "zone_append": false, 00:09:38.943 "compare": false, 00:09:38.943 "compare_and_write": false, 00:09:38.943 "abort": false, 00:09:38.943 "seek_hole": true, 00:09:38.943 "seek_data": true, 00:09:38.943 "copy": false, 00:09:38.943 "nvme_iov_md": false 00:09:38.943 }, 00:09:38.943 "driver_specific": { 00:09:38.943 "lvol": { 00:09:38.943 "lvol_store_uuid": "b64c90e3-c73a-4fbb-87dd-c53e78147dcb", 00:09:38.943 "base_bdev": "aio_bdev", 00:09:38.943 "thin_provision": false, 00:09:38.943 "num_allocated_clusters": 38, 00:09:38.943 "snapshot": false, 00:09:38.943 "clone": false, 00:09:38.943 "esnap_clone": false 00:09:38.943 } 00:09:38.943 } 00:09:38.943 } 00:09:38.943 ] 00:09:38.943 09:47:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:09:38.943 09:47:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b64c90e3-c73a-4fbb-87dd-c53e78147dcb 00:09:38.943 09:47:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:39.203 09:47:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:39.203 09:47:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:39.203 09:47:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b64c90e3-c73a-4fbb-87dd-c53e78147dcb 00:09:39.462 09:47:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:39.462 09:47:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 3f90170e-5aca-4f01-83b7-989641a0a65f 00:09:39.721 09:47:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b64c90e3-c73a-4fbb-87dd-c53e78147dcb 00:09:39.980 09:47:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:40.238 09:47:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:40.496 ************************************ 00:09:40.496 END TEST lvs_grow_dirty 00:09:40.496 ************************************ 00:09:40.496 00:09:40.496 real 0m20.700s 00:09:40.496 user 0m43.201s 00:09:40.496 sys 0m9.563s 00:09:40.496 09:47:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:40.496 09:47:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:40.755 09:47:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:40.755 09:47:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:09:40.755 09:47:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:09:40.755 09:47:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:09:40.755 09:47:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:40.755 09:47:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:09:40.755 09:47:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:09:40.755 09:47:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:09:40.755 09:47:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:40.755 nvmf_trace.0 00:09:40.755 09:47:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:09:40.755 09:47:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:40.755 09:47:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:40.755 09:47:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:09:41.016 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:41.016 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:09:41.016 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:41.016 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:41.016 rmmod nvme_tcp 00:09:41.016 rmmod nvme_fabrics 00:09:41.016 rmmod nvme_keyring 00:09:41.016 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:41.016 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:09:41.016 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:09:41.016 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 63671 ']' 00:09:41.016 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 63671 00:09:41.016 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 63671 ']' 00:09:41.016 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 63671 00:09:41.016 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:09:41.016 09:47:06 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:41.016 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63671 00:09:41.016 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:41.016 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:41.016 killing process with pid 63671 00:09:41.016 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63671' 00:09:41.016 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 63671 00:09:41.016 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 63671 00:09:41.275 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:41.275 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:41.275 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:41.275 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:09:41.275 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:41.275 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:09:41.275 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:09:41.275 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:41.275 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:41.275 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:41.275 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:41.275 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:41.275 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:41.275 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:41.275 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:41.534 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:41.534 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:41.534 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:41.534 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:41.534 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:41.534 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:41.534 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:41.534 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@246 -- # remove_spdk_ns 00:09:41.534 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:41.534 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:41.534 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:41.534 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 00:09:41.534 00:09:41.534 real 0m42.824s 00:09:41.534 user 1m7.345s 00:09:41.534 sys 0m13.394s 00:09:41.534 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:41.534 ************************************ 00:09:41.534 END TEST nvmf_lvs_grow 00:09:41.534 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:41.534 ************************************ 00:09:41.534 09:47:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:41.534 09:47:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:41.534 09:47:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:41.534 09:47:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:41.534 ************************************ 00:09:41.534 START TEST nvmf_bdev_io_wait 00:09:41.534 ************************************ 00:09:41.534 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:41.794 * Looking for test storage... 
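Before the nvmf_bdev_io_wait output continues below, the lvs_grow_dirty flow that just finished can be condensed into the RPC sequence it exercised: re-create the AIO bdev over the same backing file so the blobstore on it is recovered, wait for examine, confirm the logical volume is visible again, check the cluster accounting, then tear everything down. A minimal sketch using the same rpc.py subcommands, paths, and UUIDs shown in the log above:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
AIO_FILE=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
LVOL=3f90170e-5aca-4f01-83b7-989641a0a65f
LVS=b64c90e3-c73a-4fbb-87dd-c53e78147dcb

$RPC bdev_aio_create "$AIO_FILE" aio_bdev 4096        # blobstore recovery runs when the lvstore is loaded
$RPC bdev_wait_for_examine
$RPC bdev_get_bdevs -b "$LVOL" -t 2000                # lvol reappears with its metadata intact
$RPC bdev_lvol_get_lvstores -u "$LVS" | jq -r '.[0].free_clusters'        # compared against 61 above
$RPC bdev_lvol_get_lvstores -u "$LVS" | jq -r '.[0].total_data_clusters'  # compared against 99 above
$RPC bdev_lvol_delete "$LVOL"                         # teardown, as at the end of the test
$RPC bdev_lvol_delete_lvstore -u "$LVS"
$RPC bdev_aio_delete aio_bdev
rm -f "$AIO_FILE"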
00:09:41.794 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:41.794 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:41.794 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:09:41.794 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:41.794 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:41.794 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:41.794 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:41.794 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:41.794 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:09:41.794 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:09:41.794 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:09:41.794 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:09:41.794 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:09:41.794 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:09:41.794 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:09:41.794 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:41.794 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:09:41.794 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:09:41.794 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:41.794 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:41.794 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:09:41.794 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:09:41.794 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:41.794 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:09:41.794 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:09:41.794 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:09:41.794 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:09:41.794 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:41.794 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:09:41.794 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:09:41.794 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:41.794 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:41.794 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:09:41.794 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:41.794 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:41.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:41.794 --rc genhtml_branch_coverage=1 00:09:41.794 --rc genhtml_function_coverage=1 00:09:41.794 --rc genhtml_legend=1 00:09:41.794 --rc geninfo_all_blocks=1 00:09:41.794 --rc geninfo_unexecuted_blocks=1 00:09:41.794 00:09:41.794 ' 00:09:41.794 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:41.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:41.794 --rc genhtml_branch_coverage=1 00:09:41.794 --rc genhtml_function_coverage=1 00:09:41.794 --rc genhtml_legend=1 00:09:41.794 --rc geninfo_all_blocks=1 00:09:41.794 --rc geninfo_unexecuted_blocks=1 00:09:41.794 00:09:41.794 ' 00:09:41.794 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:41.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:41.794 --rc genhtml_branch_coverage=1 00:09:41.794 --rc genhtml_function_coverage=1 00:09:41.794 --rc genhtml_legend=1 00:09:41.794 --rc geninfo_all_blocks=1 00:09:41.794 --rc geninfo_unexecuted_blocks=1 00:09:41.794 00:09:41.794 ' 00:09:41.794 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:41.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:41.794 --rc genhtml_branch_coverage=1 00:09:41.794 --rc genhtml_function_coverage=1 00:09:41.794 --rc genhtml_legend=1 00:09:41.794 --rc geninfo_all_blocks=1 00:09:41.794 --rc geninfo_unexecuted_blocks=1 00:09:41.794 00:09:41.794 ' 00:09:41.794 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:41.794 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@7 -- # uname -s 00:09:41.794 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:41.794 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:41.795 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:41.795 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:41.795 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:41.795 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:41.795 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:41.795 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:41.795 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:41.795 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:41.795 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:09:41.795 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:09:41.795 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:41.795 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:41.795 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:41.795 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:41.795 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:41.795 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:09:41.795 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:41.795 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:41.795 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:41.795 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:41.795 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:41.795 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:41.795 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:41.795 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:41.795 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:09:41.795 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:41.795 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:41.795 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:41.795 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:41.795 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:41.795 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:41.795 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:41.795 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:41.795 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:41.795 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:41.795 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:41.795 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 
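The "[: : integer expression expected" message emitted from nvmf/common.sh line 33 above is bash complaining that `[ '' -eq 1 ]` was asked to compare an empty string numerically; `-eq` requires integer operands on both sides. The flag being tested is empty in this run, so the check fails noisily but harmlessly. A tiny illustration (the variable name is a stand-in, not the one used in common.sh):

flag=""                      # stand-in for an unset/empty test flag
[ "$flag" -eq 1 ]            # -> "[: : integer expression expected", non-zero exit status
[ "${flag:-0}" -eq 1 ]       # defaulting the operand keeps the numeric test well-formed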
00:09:41.795 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:41.795 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:41.795 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:41.795 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:41.795 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:41.795 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:41.795 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:41.795 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:41.795 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:41.795 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:41.795 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:41.795 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:41.795 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:41.795 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:41.795 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:41.795 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:41.795 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:41.795 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:41.795 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:41.795 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:41.795 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:41.795 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:41.795 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:41.795 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:41.795 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:41.795 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:41.795 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:41.795 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:41.795 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:41.795 
09:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:41.795 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:41.795 09:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:41.795 Cannot find device "nvmf_init_br" 00:09:41.795 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:09:41.795 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:41.795 Cannot find device "nvmf_init_br2" 00:09:41.795 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:09:41.795 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:41.795 Cannot find device "nvmf_tgt_br" 00:09:41.795 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 00:09:41.795 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:41.795 Cannot find device "nvmf_tgt_br2" 00:09:41.795 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 00:09:41.795 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:41.795 Cannot find device "nvmf_init_br" 00:09:42.055 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 00:09:42.055 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:42.055 Cannot find device "nvmf_init_br2" 00:09:42.055 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 00:09:42.055 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:42.055 Cannot find device "nvmf_tgt_br" 00:09:42.055 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 00:09:42.055 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:42.055 Cannot find device "nvmf_tgt_br2" 00:09:42.055 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 00:09:42.055 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:42.055 Cannot find device "nvmf_br" 00:09:42.055 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 00:09:42.055 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:42.055 Cannot find device "nvmf_init_if" 00:09:42.055 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 00:09:42.055 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:42.055 Cannot find device "nvmf_init_if2" 00:09:42.055 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 00:09:42.055 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:42.055 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:42.055 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 00:09:42.055 
09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:42.055 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:42.055 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 00:09:42.055 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:42.055 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:42.055 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:42.055 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:42.055 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:42.055 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:42.055 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:42.055 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:42.055 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:42.055 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:42.056 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:42.056 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:42.056 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:42.056 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:42.056 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:42.056 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:42.056 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:42.056 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:42.056 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:42.056 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:42.056 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:42.056 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:42.056 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:42.315 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:42.316 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:42.316 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:42.316 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:42.316 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:42.316 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:42.316 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:42.316 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:42.316 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:42.316 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:42.316 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:42.316 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:09:42.316 00:09:42.316 --- 10.0.0.3 ping statistics --- 00:09:42.316 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:42.316 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:09:42.316 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:42.316 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:42.316 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.041 ms 00:09:42.316 00:09:42.316 --- 10.0.0.4 ping statistics --- 00:09:42.316 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:42.316 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:09:42.316 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:42.316 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:42.316 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.053 ms 00:09:42.316 00:09:42.316 --- 10.0.0.1 ping statistics --- 00:09:42.316 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:42.316 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:09:42.316 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:42.316 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:42.316 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.097 ms 00:09:42.316 00:09:42.316 --- 10.0.0.2 ping statistics --- 00:09:42.316 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:42.316 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:09:42.316 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:42.316 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@461 -- # return 0 00:09:42.316 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:42.316 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:42.316 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:42.316 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:42.316 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:42.316 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:42.316 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:42.316 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:42.316 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:42.316 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:42.316 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:42.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:42.316 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=64031 00:09:42.316 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:42.316 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 64031 00:09:42.316 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 64031 ']' 00:09:42.316 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:42.316 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:42.316 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:42.316 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:42.316 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:42.316 [2024-12-06 09:47:07.529470] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
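The block above is nvmf_veth_init building the virtual test network: a dedicated namespace for the target, a veth pair per endpoint, the host-side peer ends enslaved to a bridge, 10.0.0.1-10.0.0.4/24 addressing, iptables ACCEPT rules for port 4420, and four pings to prove connectivity. Condensed into one sketch with the same interface names and addresses as the log:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if  type veth peer name nvmf_init_br     # initiator side
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br      # target side
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk                # target ends live in the namespace
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up; ip link set nvmf_init_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
    ip link set "$dev" master nvmf_br                          # stitch the peer ends together
done
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.3                                             # initiator -> target
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1              # target -> initiator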
00:09:42.316 [2024-12-06 09:47:07.529835] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:42.577 [2024-12-06 09:47:07.677826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:42.577 [2024-12-06 09:47:07.739616] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:42.577 [2024-12-06 09:47:07.739926] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:42.577 [2024-12-06 09:47:07.740109] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:42.577 [2024-12-06 09:47:07.740161] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:42.577 [2024-12-06 09:47:07.740259] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:42.577 [2024-12-06 09:47:07.741419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:42.577 [2024-12-06 09:47:07.741519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:42.577 [2024-12-06 09:47:07.742251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:42.577 [2024-12-06 09:47:07.742292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:42.577 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:42.577 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:09:42.577 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:42.577 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:42.577 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:42.577 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:42.577 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:42.577 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.577 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:42.838 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.838 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:42.838 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.838 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:42.838 [2024-12-06 09:47:07.909451] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:42.838 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.838 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:42.838 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.838 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:42.838 [2024-12-06 09:47:07.925826] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:42.838 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.838 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:42.838 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.838 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:42.838 Malloc0 00:09:42.838 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.838 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:42.838 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.838 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:42.838 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.838 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:42.838 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.838 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:42.838 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.838 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:42.838 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.838 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:42.838 [2024-12-06 09:47:07.982045] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:42.838 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.838 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=64064 00:09:42.838 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:42.838 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:42.838 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=64066 00:09:42.838 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:42.838 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:42.838 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:42.838 09:47:07 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:42.838 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:42.838 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:42.838 { 00:09:42.838 "params": { 00:09:42.838 "name": "Nvme$subsystem", 00:09:42.838 "trtype": "$TEST_TRANSPORT", 00:09:42.838 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:42.838 "adrfam": "ipv4", 00:09:42.838 "trsvcid": "$NVMF_PORT", 00:09:42.838 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:42.838 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:42.838 "hdgst": ${hdgst:-false}, 00:09:42.838 "ddgst": ${ddgst:-false} 00:09:42.838 }, 00:09:42.838 "method": "bdev_nvme_attach_controller" 00:09:42.838 } 00:09:42.838 EOF 00:09:42.838 )") 00:09:42.838 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:42.838 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:42.838 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=64068 00:09:42.838 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:42.838 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:42.838 { 00:09:42.838 "params": { 00:09:42.838 "name": "Nvme$subsystem", 00:09:42.838 "trtype": "$TEST_TRANSPORT", 00:09:42.838 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:42.838 "adrfam": "ipv4", 00:09:42.838 "trsvcid": "$NVMF_PORT", 00:09:42.838 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:42.838 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:42.838 "hdgst": ${hdgst:-false}, 00:09:42.838 "ddgst": ${ddgst:-false} 00:09:42.838 }, 00:09:42.838 "method": "bdev_nvme_attach_controller" 00:09:42.838 } 00:09:42.838 EOF 00:09:42.838 )") 00:09:42.838 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:42.838 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:42.838 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:42.838 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:42.838 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:42.838 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:42.838 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:42.838 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:42.838 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:42.838 { 00:09:42.838 "params": { 00:09:42.838 "name": "Nvme$subsystem", 00:09:42.838 "trtype": "$TEST_TRANSPORT", 00:09:42.838 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:42.838 "adrfam": "ipv4", 00:09:42.838 "trsvcid": 
"$NVMF_PORT", 00:09:42.838 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:42.838 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:42.838 "hdgst": ${hdgst:-false}, 00:09:42.838 "ddgst": ${ddgst:-false} 00:09:42.838 }, 00:09:42.838 "method": "bdev_nvme_attach_controller" 00:09:42.838 } 00:09:42.838 EOF 00:09:42.838 )") 00:09:42.838 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=64071 00:09:42.838 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:42.838 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:42.838 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:42.838 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:42.838 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:42.838 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:42.838 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:42.838 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:42.838 { 00:09:42.838 "params": { 00:09:42.838 "name": "Nvme$subsystem", 00:09:42.838 "trtype": "$TEST_TRANSPORT", 00:09:42.838 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:42.838 "adrfam": "ipv4", 00:09:42.838 "trsvcid": "$NVMF_PORT", 00:09:42.838 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:42.838 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:42.838 "hdgst": ${hdgst:-false}, 00:09:42.838 "ddgst": ${ddgst:-false} 00:09:42.839 }, 00:09:42.839 "method": "bdev_nvme_attach_controller" 00:09:42.839 } 00:09:42.839 EOF 00:09:42.839 )") 00:09:42.839 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:42.839 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:42.839 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:42.839 "params": { 00:09:42.839 "name": "Nvme1", 00:09:42.839 "trtype": "tcp", 00:09:42.839 "traddr": "10.0.0.3", 00:09:42.839 "adrfam": "ipv4", 00:09:42.839 "trsvcid": "4420", 00:09:42.839 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:42.839 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:42.839 "hdgst": false, 00:09:42.839 "ddgst": false 00:09:42.839 }, 00:09:42.839 "method": "bdev_nvme_attach_controller" 00:09:42.839 }' 00:09:42.839 09:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:42.839 09:47:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:42.839 09:47:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:42.839 "params": { 00:09:42.839 "name": "Nvme1", 00:09:42.839 "trtype": "tcp", 00:09:42.839 "traddr": "10.0.0.3", 00:09:42.839 "adrfam": "ipv4", 00:09:42.839 "trsvcid": "4420", 00:09:42.839 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:42.839 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:42.839 "hdgst": false, 00:09:42.839 "ddgst": false 00:09:42.839 }, 00:09:42.839 "method": "bdev_nvme_attach_controller" 00:09:42.839 }' 00:09:42.839 09:47:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:09:42.839 09:47:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:42.839 09:47:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:42.839 "params": { 00:09:42.839 "name": "Nvme1", 00:09:42.839 "trtype": "tcp", 00:09:42.839 "traddr": "10.0.0.3", 00:09:42.839 "adrfam": "ipv4", 00:09:42.839 "trsvcid": "4420", 00:09:42.839 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:42.839 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:42.839 "hdgst": false, 00:09:42.839 "ddgst": false 00:09:42.839 }, 00:09:42.839 "method": "bdev_nvme_attach_controller" 00:09:42.839 }' 00:09:42.839 09:47:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:42.839 09:47:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:42.839 09:47:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:42.839 "params": { 00:09:42.839 "name": "Nvme1", 00:09:42.839 "trtype": "tcp", 00:09:42.839 "traddr": "10.0.0.3", 00:09:42.839 "adrfam": "ipv4", 00:09:42.839 "trsvcid": "4420", 00:09:42.839 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:42.839 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:42.839 "hdgst": false, 00:09:42.839 "ddgst": false 00:09:42.839 }, 00:09:42.839 "method": "bdev_nvme_attach_controller" 00:09:42.839 }' 00:09:42.839 [2024-12-06 09:47:08.050715] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:09:42.839 [2024-12-06 09:47:08.051549] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:42.839 [2024-12-06 09:47:08.056562] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:09:42.839 [2024-12-06 09:47:08.056837] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:09:42.839 09:47:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 64064 00:09:42.839 [2024-12-06 09:47:08.066186] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:09:42.839 [2024-12-06 09:47:08.066412] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:42.839 [2024-12-06 09:47:08.093337] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:09:42.839 [2024-12-06 09:47:08.093737] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:43.098 [2024-12-06 09:47:08.275704] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:43.098 [2024-12-06 09:47:08.333525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:09:43.098 [2024-12-06 09:47:08.347768] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:43.098 [2024-12-06 09:47:08.352819] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:43.358 [2024-12-06 09:47:08.410457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:43.358 [2024-12-06 09:47:08.421597] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:43.358 [2024-12-06 09:47:08.424706] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:43.359 [2024-12-06 09:47:08.489640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:43.359 [2024-12-06 09:47:08.503759] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:43.359 [2024-12-06 09:47:08.524554] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:43.359 Running I/O for 1 seconds... 00:09:43.359 Running I/O for 1 seconds... 00:09:43.359 [2024-12-06 09:47:08.580705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:09:43.359 [2024-12-06 09:47:08.594804] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:43.618 Running I/O for 1 seconds... 00:09:43.618 Running I/O for 1 seconds... 
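For orientation, the four bdevperf invocations traced above (read, flush, unmap and write, each on its own core mask and DPDK file prefix) all receive their target configuration from gen_nvmf_target_json, which appends one heredoc JSON fragment per subsystem to a bash array and hands the jq-formatted result to bdevperf over process substitution (the --json /dev/fd/63 argument). Below is a minimal sketch of that fragment-building pattern, not the exact helper: the function name is invented, the hard-coded address and port stand in for the script's variables, and the real helper embeds the joined fragments in the full configuration it feeds to bdevperf, whereas the sketch stops at the fragment array.

# Sketch only: builds bdev_nvme_attach_controller fragments and joins them into a JSON array.
gen_target_json_sketch() {
  local subsystem config=()
  for subsystem in "${@:-1}"; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.3",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
  done
  local IFS=,                  # join the fragments with commas
  jq . <<< "[${config[*]}]"    # pretty-print the resulting JSON array
}
# e.g. gen_target_json_sketch 1 2 would emit two attach entries.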
00:09:44.554 167840.00 IOPS, 655.62 MiB/s 00:09:44.554 Latency(us) 00:09:44.554 [2024-12-06T09:47:09.826Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:44.554 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:44.554 Nvme1n1 : 1.00 167435.44 654.04 0.00 0.00 760.21 361.19 2383.13 00:09:44.554 [2024-12-06T09:47:09.826Z] =================================================================================================================== 00:09:44.554 [2024-12-06T09:47:09.826Z] Total : 167435.44 654.04 0.00 0.00 760.21 361.19 2383.13 00:09:44.554 4780.00 IOPS, 18.67 MiB/s 00:09:44.554 Latency(us) 00:09:44.554 [2024-12-06T09:47:09.826Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:44.554 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:44.554 Nvme1n1 : 1.04 4732.66 18.49 0.00 0.00 26567.77 8102.63 46470.98 00:09:44.554 [2024-12-06T09:47:09.826Z] =================================================================================================================== 00:09:44.554 [2024-12-06T09:47:09.826Z] Total : 4732.66 18.49 0.00 0.00 26567.77 8102.63 46470.98 00:09:44.554 4434.00 IOPS, 17.32 MiB/s 00:09:44.554 Latency(us) 00:09:44.554 [2024-12-06T09:47:09.826Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:44.554 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:44.554 Nvme1n1 : 1.01 4538.06 17.73 0.00 0.00 28075.81 8162.21 46947.61 00:09:44.554 [2024-12-06T09:47:09.826Z] =================================================================================================================== 00:09:44.554 [2024-12-06T09:47:09.826Z] Total : 4538.06 17.73 0.00 0.00 28075.81 8162.21 46947.61 00:09:44.554 5839.00 IOPS, 22.81 MiB/s 00:09:44.554 Latency(us) 00:09:44.554 [2024-12-06T09:47:09.826Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:44.554 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:44.554 Nvme1n1 : 1.01 5917.17 23.11 0.00 0.00 21510.40 9294.20 34555.35 00:09:44.554 [2024-12-06T09:47:09.826Z] =================================================================================================================== 00:09:44.554 [2024-12-06T09:47:09.826Z] Total : 5917.17 23.11 0.00 0.00 21510.40 9294.20 34555.35 00:09:44.813 09:47:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 64066 00:09:44.813 09:47:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 64068 00:09:44.813 09:47:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 64071 00:09:44.813 09:47:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:44.813 09:47:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.813 09:47:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:44.813 09:47:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.813 09:47:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:44.813 09:47:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:44.813 09:47:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:09:44.813 09:47:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:09:44.813 09:47:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:44.813 09:47:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:09:44.813 09:47:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:44.813 09:47:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:44.813 rmmod nvme_tcp 00:09:44.813 rmmod nvme_fabrics 00:09:44.813 rmmod nvme_keyring 00:09:44.813 09:47:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:44.813 09:47:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:09:44.813 09:47:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:09:44.813 09:47:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 64031 ']' 00:09:44.813 09:47:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 64031 00:09:44.813 09:47:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 64031 ']' 00:09:44.813 09:47:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 64031 00:09:44.813 09:47:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:09:44.813 09:47:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:44.813 09:47:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64031 00:09:44.813 killing process with pid 64031 00:09:44.813 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:44.813 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:44.813 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64031' 00:09:44.813 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 64031 00:09:44.813 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 64031 00:09:45.072 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:45.072 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:45.072 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:45.072 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:09:45.072 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:09:45.072 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:45.072 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:09:45.072 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:45.072 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:45.072 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:45.072 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:45.072 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:45.072 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:45.072 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:45.072 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:45.072 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:45.072 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:45.072 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:45.331 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:45.331 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:45.331 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:45.331 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:45.331 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:45.331 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:45.331 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:45.331 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:45.331 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 00:09:45.331 00:09:45.331 real 0m3.682s 00:09:45.331 user 0m14.715s 00:09:45.331 sys 0m2.168s 00:09:45.331 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:45.331 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:45.331 ************************************ 00:09:45.331 END TEST nvmf_bdev_io_wait 00:09:45.331 ************************************ 00:09:45.331 09:47:10 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:45.331 09:47:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:45.331 09:47:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:45.331 09:47:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:45.331 ************************************ 00:09:45.331 START TEST nvmf_queue_depth 00:09:45.331 ************************************ 00:09:45.331 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:45.331 * Looking for test storage... 
00:09:45.331 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:45.331 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:45.331 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:45.331 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:09:45.591 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:45.591 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:45.591 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:45.591 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:45.591 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:09:45.591 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:09:45.591 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:09:45.591 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:09:45.591 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:09:45.591 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:09:45.591 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:09:45.591 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:45.591 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:09:45.591 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:09:45.591 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:45.591 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:45.591 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:09:45.591 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:09:45.591 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:45.591 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:09:45.592 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:09:45.592 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:09:45.592 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:09:45.592 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:45.592 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:09:45.592 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:09:45.592 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:45.592 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:45.592 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:09:45.592 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:45.592 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:45.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.592 --rc genhtml_branch_coverage=1 00:09:45.592 --rc genhtml_function_coverage=1 00:09:45.592 --rc genhtml_legend=1 00:09:45.592 --rc geninfo_all_blocks=1 00:09:45.592 --rc geninfo_unexecuted_blocks=1 00:09:45.592 00:09:45.592 ' 00:09:45.592 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:45.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.592 --rc genhtml_branch_coverage=1 00:09:45.592 --rc genhtml_function_coverage=1 00:09:45.592 --rc genhtml_legend=1 00:09:45.592 --rc geninfo_all_blocks=1 00:09:45.592 --rc geninfo_unexecuted_blocks=1 00:09:45.592 00:09:45.592 ' 00:09:45.592 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:45.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.592 --rc genhtml_branch_coverage=1 00:09:45.592 --rc genhtml_function_coverage=1 00:09:45.592 --rc genhtml_legend=1 00:09:45.592 --rc geninfo_all_blocks=1 00:09:45.592 --rc geninfo_unexecuted_blocks=1 00:09:45.592 00:09:45.592 ' 00:09:45.592 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:45.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.592 --rc genhtml_branch_coverage=1 00:09:45.592 --rc genhtml_function_coverage=1 00:09:45.592 --rc genhtml_legend=1 00:09:45.592 --rc geninfo_all_blocks=1 00:09:45.592 --rc geninfo_unexecuted_blocks=1 00:09:45.592 00:09:45.592 ' 00:09:45.592 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:45.592 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 
-- # uname -s 00:09:45.592 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:45.592 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:45.592 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:45.592 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:45.592 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:45.592 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:45.592 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:45.592 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:45.592 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:45.592 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:45.592 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:09:45.592 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:09:45.592 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:45.592 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:45.592 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:45.592 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:45.592 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:45.592 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:09:45.592 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:45.592 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:45.592 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:45.592 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.592 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.592 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.592 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:45.592 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.592 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:09:45.592 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:45.592 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:45.592 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:45.592 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:45.592 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:45.592 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:45.592 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:45.592 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:45.592 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:45.592 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:45.593 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:45.593 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:09:45.593 
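One detail worth flagging in the trace above: the message "/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected" is bash complaining that an empty string reached a numeric test ('[' '' -eq 1 ']'); the run continues regardless. Where such a check needs to stay quiet, the usual hardening is to give the variable a numeric default before comparing. The variable name below is a placeholder, not the one common.sh actually tests:

# Hypothetical hardening of a numeric flag check; SOME_TEST_FLAG is a placeholder name.
if [[ "${SOME_TEST_FLAG:-0}" -eq 1 ]]; then
  echo "flag enabled"
fi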
09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:45.593 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:45.593 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:45.593 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:45.593 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:45.593 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:45.593 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:45.593 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:45.593 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:45.593 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:45.593 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:45.593 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:45.593 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:45.593 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:45.593 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:45.593 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:45.593 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:45.593 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:45.593 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:45.593 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:45.593 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:45.593 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:45.593 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:45.593 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:45.593 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:45.593 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:45.593 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:45.593 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:45.593 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:45.593 09:47:10 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:45.593 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:45.593 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:45.593 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:45.593 Cannot find device "nvmf_init_br" 00:09:45.593 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:09:45.593 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:45.593 Cannot find device "nvmf_init_br2" 00:09:45.593 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:09:45.593 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:45.593 Cannot find device "nvmf_tgt_br" 00:09:45.593 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 00:09:45.593 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:45.593 Cannot find device "nvmf_tgt_br2" 00:09:45.593 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 00:09:45.593 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:45.593 Cannot find device "nvmf_init_br" 00:09:45.593 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 00:09:45.593 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:45.593 Cannot find device "nvmf_init_br2" 00:09:45.593 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 00:09:45.593 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:45.593 Cannot find device "nvmf_tgt_br" 00:09:45.593 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 00:09:45.593 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:45.593 Cannot find device "nvmf_tgt_br2" 00:09:45.593 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 00:09:45.593 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:45.593 Cannot find device "nvmf_br" 00:09:45.593 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 00:09:45.593 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:45.593 Cannot find device "nvmf_init_if" 00:09:45.593 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 00:09:45.593 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:45.593 Cannot find device "nvmf_init_if2" 00:09:45.593 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 00:09:45.593 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:45.853 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:45.853 09:47:10 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 00:09:45.853 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:45.853 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:45.853 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 00:09:45.853 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:45.853 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:45.853 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:45.853 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:45.853 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:45.853 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:45.853 09:47:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:45.853 09:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:45.853 09:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:45.853 09:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:45.853 09:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:45.853 09:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:45.853 09:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:45.853 09:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:45.853 09:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:45.853 09:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:45.853 09:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:45.853 09:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:45.853 09:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:45.853 09:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:45.853 09:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:45.853 09:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:45.853 09:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:45.853 
09:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:45.853 09:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:45.853 09:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:46.113 09:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:46.113 09:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:46.113 09:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:46.113 09:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:46.113 09:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:46.113 09:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:46.113 09:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:46.113 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:46.113 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.123 ms 00:09:46.113 00:09:46.113 --- 10.0.0.3 ping statistics --- 00:09:46.113 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:46.113 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:09:46.113 09:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:46.113 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:46.113 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.059 ms 00:09:46.113 00:09:46.113 --- 10.0.0.4 ping statistics --- 00:09:46.113 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:46.113 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:09:46.113 09:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:46.113 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:46.113 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:09:46.113 00:09:46.113 --- 10.0.0.1 ping statistics --- 00:09:46.113 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:46.113 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:09:46.113 09:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:46.113 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:46.113 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:09:46.113 00:09:46.113 --- 10.0.0.2 ping statistics --- 00:09:46.113 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:46.113 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:09:46.113 09:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:46.113 09:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@461 -- # return 0 00:09:46.113 09:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:46.113 09:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:46.113 09:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:46.113 09:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:46.113 09:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:46.113 09:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:46.113 09:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:46.113 09:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:46.113 09:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:46.113 09:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:46.113 09:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:46.113 09:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=64329 00:09:46.113 09:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 64329 00:09:46.113 09:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 64329 ']' 00:09:46.113 09:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:46.113 09:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:46.113 09:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:46.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:46.113 09:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:46.113 09:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:46.113 09:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:46.113 [2024-12-06 09:47:11.239339] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
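The ip commands traced above assemble the test fabric before the nvmf target starts: veth pairs for the initiator side (addresses 10.0.0.1 and 10.0.0.2) and for the target side (10.0.0.3 and 10.0.0.4, moved into the nvmf_tgt_ns_spdk namespace), all joined through the nvmf_br bridge, with the four pings confirming reachability in both directions. A condensed restatement with only one pair per side is sketched below; the commands mirror those in the trace, but the full script also brings up the second interfaces and adds the iptables ACCEPT rules shown above.

# Condensed sketch of the veth/namespace topology used by the test (one pair per side).
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end stays on the host
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target end moves into the namespace
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ping -c 1 10.0.0.3    # host reaches the target address inside the namespace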
00:09:46.113 [2024-12-06 09:47:11.239419] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:46.371 [2024-12-06 09:47:11.389471] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:46.371 [2024-12-06 09:47:11.466093] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:46.371 [2024-12-06 09:47:11.466158] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:46.371 [2024-12-06 09:47:11.466169] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:46.371 [2024-12-06 09:47:11.466178] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:46.371 [2024-12-06 09:47:11.466185] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:46.371 [2024-12-06 09:47:11.466715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:46.371 [2024-12-06 09:47:11.537431] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:46.371 09:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:46.372 09:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:46.372 09:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:46.372 09:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:46.372 09:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:46.630 09:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:46.630 09:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:46.630 09:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.630 09:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:46.630 [2024-12-06 09:47:11.669765] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:46.630 09:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.630 09:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:46.630 09:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.630 09:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:46.630 Malloc0 00:09:46.630 09:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.630 09:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:46.630 09:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.630 09:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # 
set +x 00:09:46.631 09:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.631 09:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:46.631 09:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.631 09:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:46.631 09:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.631 09:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:46.631 09:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.631 09:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:46.631 [2024-12-06 09:47:11.726993] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:46.631 09:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.631 09:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=64348 00:09:46.631 09:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:46.631 09:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:46.631 09:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 64348 /var/tmp/bdevperf.sock 00:09:46.631 09:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 64348 ']' 00:09:46.631 09:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:46.631 09:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:46.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:46.631 09:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:46.631 09:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:46.631 09:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:46.631 [2024-12-06 09:47:11.795286] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
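The rpc_cmd calls traced above are what stand up the queue-depth target: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, a subsystem exposing that bdev as a namespace, and a listener on 10.0.0.3:4420. rpc_cmd is the test suite's wrapper; issued by hand against the target's RPC socket the same sequence would look roughly like this (scripts/rpc.py is assumed here as the usual entry point behind that wrapper):

# Rough standalone equivalent of the target-side setup above.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420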
00:09:46.631 [2024-12-06 09:47:11.795398] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64348 ] 00:09:46.890 [2024-12-06 09:47:11.950470] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:46.890 [2024-12-06 09:47:12.012758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:46.890 [2024-12-06 09:47:12.071775] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:46.890 09:47:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:46.890 09:47:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:46.890 09:47:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:46.890 09:47:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.890 09:47:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:47.149 NVMe0n1 00:09:47.149 09:47:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.149 09:47:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:47.149 Running I/O for 10 seconds... 00:09:49.493 6590.00 IOPS, 25.74 MiB/s [2024-12-06T09:47:15.332Z] 7171.50 IOPS, 28.01 MiB/s [2024-12-06T09:47:16.708Z] 7466.67 IOPS, 29.17 MiB/s [2024-12-06T09:47:17.643Z] 7592.25 IOPS, 29.66 MiB/s [2024-12-06T09:47:18.581Z] 7713.40 IOPS, 30.13 MiB/s [2024-12-06T09:47:19.519Z] 7798.00 IOPS, 30.46 MiB/s [2024-12-06T09:47:20.455Z] 7858.14 IOPS, 30.70 MiB/s [2024-12-06T09:47:21.391Z] 7925.12 IOPS, 30.96 MiB/s [2024-12-06T09:47:22.768Z] 7990.78 IOPS, 31.21 MiB/s [2024-12-06T09:47:22.768Z] 8122.60 IOPS, 31.73 MiB/s 00:09:57.496 Latency(us) 00:09:57.496 [2024-12-06T09:47:22.768Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:57.496 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:57.496 Verification LBA range: start 0x0 length 0x4000 00:09:57.496 NVMe0n1 : 10.07 8157.91 31.87 0.00 0.00 125007.56 18826.71 94371.84 00:09:57.496 [2024-12-06T09:47:22.768Z] =================================================================================================================== 00:09:57.496 [2024-12-06T09:47:22.769Z] Total : 8157.91 31.87 0.00 0.00 125007.56 18826.71 94371.84 00:09:57.497 { 00:09:57.497 "results": [ 00:09:57.497 { 00:09:57.497 "job": "NVMe0n1", 00:09:57.497 "core_mask": "0x1", 00:09:57.497 "workload": "verify", 00:09:57.497 "status": "finished", 00:09:57.497 "verify_range": { 00:09:57.497 "start": 0, 00:09:57.497 "length": 16384 00:09:57.497 }, 00:09:57.497 "queue_depth": 1024, 00:09:57.497 "io_size": 4096, 00:09:57.497 "runtime": 10.068516, 00:09:57.497 "iops": 8157.905296073423, 00:09:57.497 "mibps": 31.86681756278681, 00:09:57.497 "io_failed": 0, 00:09:57.497 "io_timeout": 0, 00:09:57.497 "avg_latency_us": 125007.55719682398, 00:09:57.497 "min_latency_us": 18826.705454545456, 00:09:57.497 "max_latency_us": 94371.84 00:09:57.497 } 
00:09:57.497 ], 00:09:57.497 "core_count": 1 00:09:57.497 } 00:09:57.497 09:47:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 64348 00:09:57.497 09:47:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 64348 ']' 00:09:57.497 09:47:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 64348 00:09:57.497 09:47:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:57.497 09:47:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:57.497 09:47:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64348 00:09:57.497 09:47:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:57.497 killing process with pid 64348 00:09:57.497 09:47:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:57.497 09:47:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64348' 00:09:57.497 Received shutdown signal, test time was about 10.000000 seconds 00:09:57.497 00:09:57.497 Latency(us) 00:09:57.497 [2024-12-06T09:47:22.769Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:57.497 [2024-12-06T09:47:22.769Z] =================================================================================================================== 00:09:57.497 [2024-12-06T09:47:22.769Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:57.497 09:47:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 64348 00:09:57.497 09:47:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 64348 00:09:57.497 09:47:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:57.497 09:47:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:57.497 09:47:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:57.497 09:47:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:09:57.497 09:47:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:57.497 09:47:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:09:57.497 09:47:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:57.497 09:47:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:57.497 rmmod nvme_tcp 00:09:57.497 rmmod nvme_fabrics 00:09:57.497 rmmod nvme_keyring 00:09:57.497 09:47:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:57.497 09:47:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:09:57.497 09:47:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:09:57.497 09:47:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 64329 ']' 00:09:57.497 09:47:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 64329 00:09:57.497 09:47:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 64329 ']' 00:09:57.497 
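On the initiator side the same trace shows the flow the queue_depth test drives: bdevperf is started with -z so it waits for configuration over its own RPC socket, the NVMe-oF controller is attached through that socket, and the 10-second verify run at queue depth 1024 is launched with bdevperf.py perform_tests, producing the IOPS/latency table and results JSON above. Condensed, and assuming scripts/rpc.py as a stand-in for the test's rpc_cmd wrapper:

# Condensed initiator-side flow from the queue_depth test above.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
# (the test waits for /var/tmp/bdevperf.sock to come up before issuing RPCs)
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests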
09:47:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 64329 00:09:57.497 09:47:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:57.497 09:47:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:57.497 09:47:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64329 00:09:57.756 09:47:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:57.756 09:47:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:57.757 09:47:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64329' 00:09:57.757 killing process with pid 64329 00:09:57.757 09:47:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 64329 00:09:57.757 09:47:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 64329 00:09:58.016 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:58.016 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:58.016 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:58.016 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:09:58.016 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:58.016 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:09:58.016 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:09:58.016 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:58.016 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:58.016 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:58.016 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:58.016 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:58.016 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:58.016 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:58.016 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:58.016 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:58.016 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:58.016 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:58.016 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:58.016 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:58.016 09:47:23 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:58.016 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:58.016 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:58.016 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:58.017 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:58.017 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:58.277 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 00:09:58.277 00:09:58.277 real 0m12.790s 00:09:58.277 user 0m21.149s 00:09:58.277 sys 0m2.585s 00:09:58.277 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:58.277 ************************************ 00:09:58.277 END TEST nvmf_queue_depth 00:09:58.277 ************************************ 00:09:58.277 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:58.277 09:47:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:58.277 09:47:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:58.277 09:47:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:58.277 09:47:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:58.277 ************************************ 00:09:58.277 START TEST nvmf_target_multipath 00:09:58.277 ************************************ 00:09:58.277 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:58.277 * Looking for test storage... 
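The queue_depth run summarized above amounts to three moves: start bdevperf against the target, attach the exported namespace as a local bdev over bdevperf's own RPC socket, then trigger the workload with perform_tests. rpc_cmd in the trace is the test framework's wrapper around scripts/rpc.py, so a minimal manual sketch of the same sequence, assuming a bdevperf instance already listening on /var/tmp/bdevperf.sock and the workspace paths shown in the log, would be:

  # attach the namespace served at 10.0.0.3:4420 as bdev NVMe0n1
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # kick off the preconfigured verify workload (queue depth 1024, 4 KiB I/O, ~10 s)
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bdevperf.sock perform_tests

The JSON block above is what perform_tests reports when the run finishes; its iops, mibps and *_latency_us fields are the source of the human-readable latency table printed just before it.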
00:09:58.277 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:58.277 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:58.277 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:09:58.277 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:58.277 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:58.277 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:58.277 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:58.277 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:58.277 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:09:58.277 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:09:58.277 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:09:58.277 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:09:58.277 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:09:58.277 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:09:58.277 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:09:58.277 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:58.277 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:09:58.277 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:09:58.277 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:58.277 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:58.277 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:09:58.277 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:09:58.277 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:58.277 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:09:58.277 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:09:58.277 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:09:58.277 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:09:58.277 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:58.277 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:09:58.277 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:09:58.277 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:58.277 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:58.277 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:09:58.277 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:58.277 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:58.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.277 --rc genhtml_branch_coverage=1 00:09:58.277 --rc genhtml_function_coverage=1 00:09:58.277 --rc genhtml_legend=1 00:09:58.277 --rc geninfo_all_blocks=1 00:09:58.277 --rc geninfo_unexecuted_blocks=1 00:09:58.277 00:09:58.277 ' 00:09:58.277 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:58.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.277 --rc genhtml_branch_coverage=1 00:09:58.277 --rc genhtml_function_coverage=1 00:09:58.277 --rc genhtml_legend=1 00:09:58.277 --rc geninfo_all_blocks=1 00:09:58.277 --rc geninfo_unexecuted_blocks=1 00:09:58.277 00:09:58.277 ' 00:09:58.277 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:58.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.277 --rc genhtml_branch_coverage=1 00:09:58.277 --rc genhtml_function_coverage=1 00:09:58.277 --rc genhtml_legend=1 00:09:58.277 --rc geninfo_all_blocks=1 00:09:58.277 --rc geninfo_unexecuted_blocks=1 00:09:58.277 00:09:58.277 ' 00:09:58.277 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:58.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.277 --rc genhtml_branch_coverage=1 00:09:58.277 --rc genhtml_function_coverage=1 00:09:58.277 --rc genhtml_legend=1 00:09:58.277 --rc geninfo_all_blocks=1 00:09:58.277 --rc geninfo_unexecuted_blocks=1 00:09:58.277 00:09:58.277 ' 00:09:58.277 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:58.277 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:09:58.277 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:58.277 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:58.277 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:58.277 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:58.277 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:58.277 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:58.277 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:58.277 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:58.277 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:58.277 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:58.539 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:09:58.539 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:09:58.539 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:58.539 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:58.539 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:58.539 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:58.539 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:58.539 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:09:58.539 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:58.539 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:58.539 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:58.539 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.539 
09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.539 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.539 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:58.539 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.539 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:09:58.539 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:58.539 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:58.539 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:58.539 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:58.539 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:58.539 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:58.539 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:58.539 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:58.539 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:58.539 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:58.539 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:09:58.539 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:58.539 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:58.539 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:58.539 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:58.539 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:58.539 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:58.539 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:58.539 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:58.539 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:58.539 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:58.539 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:58.539 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:58.539 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:58.539 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:58.539 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:58.539 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:58.539 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:58.539 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:58.539 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:58.539 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:58.539 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:58.539 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:58.539 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:58.539 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:58.539 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:58.539 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:58.539 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:58.539 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:58.540 09:47:23 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:58.540 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:58.540 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:58.540 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:58.540 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:58.540 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:58.540 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:58.540 Cannot find device "nvmf_init_br" 00:09:58.540 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:09:58.540 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:58.540 Cannot find device "nvmf_init_br2" 00:09:58.540 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:09:58.540 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:58.540 Cannot find device "nvmf_tgt_br" 00:09:58.540 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 00:09:58.540 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:58.540 Cannot find device "nvmf_tgt_br2" 00:09:58.540 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 00:09:58.540 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:58.540 Cannot find device "nvmf_init_br" 00:09:58.540 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 00:09:58.540 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:58.540 Cannot find device "nvmf_init_br2" 00:09:58.540 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 00:09:58.540 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:58.540 Cannot find device "nvmf_tgt_br" 00:09:58.540 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 00:09:58.540 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:58.540 Cannot find device "nvmf_tgt_br2" 00:09:58.540 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 00:09:58.540 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:58.540 Cannot find device "nvmf_br" 00:09:58.540 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 00:09:58.540 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:58.540 Cannot find device "nvmf_init_if" 00:09:58.540 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@171 -- # true 00:09:58.540 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:58.540 Cannot find device "nvmf_init_if2" 00:09:58.540 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 00:09:58.540 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:58.540 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:58.540 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 00:09:58.540 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:58.540 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:58.540 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 00:09:58.540 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:58.540 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:58.540 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:58.540 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:58.540 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:58.540 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:58.540 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:58.540 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:58.540 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:58.540 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:58.540 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:58.807 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:58.808 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:58.808 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:58.808 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:58.808 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:58.808 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:58.808 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 
00:09:58.808 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:58.808 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:58.808 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:58.808 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:58.808 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:58.808 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:58.808 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:58.808 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:58.808 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:58.808 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:58.808 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:58.808 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:58.808 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:58.808 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:58.808 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:58.808 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:58.808 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:09:58.808 00:09:58.808 --- 10.0.0.3 ping statistics --- 00:09:58.808 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:58.808 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:09:58.808 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:58.808 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:58.808 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.052 ms 00:09:58.808 00:09:58.808 --- 10.0.0.4 ping statistics --- 00:09:58.808 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:58.808 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:09:58.808 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:58.808 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:58.808 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:09:58.808 00:09:58.808 --- 10.0.0.1 ping statistics --- 00:09:58.808 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:58.808 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:09:58.808 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:58.808 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:58.808 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.050 ms 00:09:58.808 00:09:58.808 --- 10.0.0.2 ping statistics --- 00:09:58.808 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:58.808 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:09:58.808 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:58.808 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@461 -- # return 0 00:09:58.808 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:58.808 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:58.808 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:58.808 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:58.808 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:58.808 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:58.808 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:58.808 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 00:09:58.808 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:09:58.808 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:09:58.808 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:58.808 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:58.808 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:58.808 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@509 -- # nvmfpid=64722 00:09:58.808 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@510 -- # waitforlisten 64722 00:09:58.808 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@835 -- # '[' -z 64722 ']' 00:09:58.808 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:58.808 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:58.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
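At this point nvmf_veth_init has finished building the test network the target will listen on: nvmf_init_if (10.0.0.1) and nvmf_init_if2 (10.0.0.2) stay on the host as initiator interfaces, nvmf_tgt_if (10.0.0.3) and nvmf_tgt_if2 (10.0.0.4) are moved into the nvmf_tgt_ns_spdk namespace, the peer ends of all four veth pairs are enslaved to the nvmf_br bridge, and iptables ACCEPT rules open TCP port 4420; the four pings above confirm reachability in both directions. A condensed sketch of one initiator/target leg, using the same names and addresses as the trace (the script repeats this for the second pair):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end stays on the host
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target end moves into the namespace
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br                     # bridge the host and namespace legs together
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in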
00:09:58.808 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:58.808 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:58.808 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:58.808 09:47:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:58.808 [2024-12-06 09:47:24.055543] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:09:58.808 [2024-12-06 09:47:24.055649] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:59.067 [2024-12-06 09:47:24.210950] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:59.067 [2024-12-06 09:47:24.283365] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:59.067 [2024-12-06 09:47:24.283610] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:59.067 [2024-12-06 09:47:24.283783] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:59.067 [2024-12-06 09:47:24.283932] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:59.067 [2024-12-06 09:47:24.283983] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
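The target itself is just the nvmf_tgt binary launched inside that namespace with core mask 0xF and all tracepoint groups enabled; nvmfappstart then blocks until the JSON-RPC socket answers. A bare-bones equivalent of that launch-and-wait, assuming the default /var/tmp/spdk.sock socket used throughout this log (waitforlisten in the framework is more careful about error handling):

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # poll the RPC socket until the target is ready to accept configuration calls
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods \
      >/dev/null 2>&1; do
      sleep 0.5
  done

The RPC calls that follow (nvmf_create_transport, bdev_malloc_create, nvmf_create_subsystem, nvmf_subsystem_add_ns and the two nvmf_subsystem_add_listener calls for 10.0.0.3 and 10.0.0.4) are what turn that bare process into the dual-listener subsystem the multipath test connects to.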
00:09:59.067 [2024-12-06 09:47:24.285357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:59.067 [2024-12-06 09:47:24.285513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:59.067 [2024-12-06 09:47:24.286388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:59.067 [2024-12-06 09:47:24.286451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:59.326 [2024-12-06 09:47:24.345948] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:59.895 09:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:59.895 09:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@868 -- # return 0 00:09:59.895 09:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:59.895 09:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:59.895 09:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:59.895 09:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:59.895 09:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:00.153 [2024-12-06 09:47:25.413969] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:00.410 09:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:10:00.410 Malloc0 00:10:00.668 09:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:10:00.926 09:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:00.926 09:47:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:01.185 [2024-12-06 09:47:26.440369] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:01.444 09:47:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:10:01.704 [2024-12-06 09:47:26.736844] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:10:01.704 09:47:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --hostid=8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:10:01.704 09:47:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --hostid=8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -t tcp -n nqn.2016-06.io.spdk:cnode1 
-a 10.0.0.4 -s 4420 -g -G 00:10:01.964 09:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:10:01.964 09:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # local i=0 00:10:01.964 09:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:01.964 09:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:01.964 09:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # sleep 2 00:10:03.869 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:03.869 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:03.869 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:03.869 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:03.869 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:03.869 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # return 0 00:10:03.869 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:10:03.869 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:10:03.869 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:10:03.869 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:10:03.869 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:10:03.869 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:10:03.869 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:10:03.869 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:10:03.869 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:10:03.869 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:10:03.869 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:10:03.869 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:10:03.869 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:10:03.869 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:10:03.869 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 
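Because both connects target the same subsystem NQN, the kernel's native multipath layer exposes a single namespace (nvme0n1) backed by two controller paths, nvme0c0n1 and nvme0c1n1, and get_subsystem locates them by matching the NQN and serial under /sys/class/nvme-subsystem. The check_ana_state calls traced in the surrounding entries simply read the ANA state the target advertises for each path from sysfs and wait, up to the 20-second timeout, for it to reach the expected value. A stripped-down sketch of that check, using the sysfs paths from the trace:

  for path in nvme0c0n1 nvme0c1n1; do
      # freshly connected paths should both report "optimized"
      cat /sys/block/$path/ana_state
  done

Later in the run the test flips these states through nvmf_subsystem_listener_set_ana_state (inaccessible, non_optimized, optimized) and re-runs fio to confirm that I/O keeps flowing over whichever path remains usable.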
00:10:03.869 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:03.869 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:03.869 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:03.869 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:03.869 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:10:03.869 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:10:03.869 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:03.869 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:03.869 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:03.869 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:03.869 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:10:03.869 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=64817 00:10:03.869 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:10:03.869 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:10:03.869 [global] 00:10:03.869 thread=1 00:10:03.869 invalidate=1 00:10:03.869 rw=randrw 00:10:03.869 time_based=1 00:10:03.869 runtime=6 00:10:03.869 ioengine=libaio 00:10:03.869 direct=1 00:10:03.869 bs=4096 00:10:03.869 iodepth=128 00:10:03.869 norandommap=0 00:10:03.869 numjobs=1 00:10:03.869 00:10:03.869 verify_dump=1 00:10:03.869 verify_backlog=512 00:10:03.869 verify_state_save=0 00:10:03.869 do_verify=1 00:10:03.869 verify=crc32c-intel 00:10:03.869 [job0] 00:10:03.869 filename=/dev/nvme0n1 00:10:03.869 Could not set queue depth (nvme0n1) 00:10:04.128 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:04.128 fio-3.35 00:10:04.128 Starting 1 thread 00:10:05.117 09:47:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:10:05.117 09:47:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:10:05.682 09:47:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:10:05.682 09:47:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:10:05.682 09:47:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 
00:10:05.682 09:47:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:05.682 09:47:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:05.682 09:47:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:05.682 09:47:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:10:05.682 09:47:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:10:05.682 09:47:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:05.682 09:47:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:05.682 09:47:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:05.682 09:47:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:05.682 09:47:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:10:05.682 09:47:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:10:06.248 09:47:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:10:06.248 09:47:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:10:06.248 09:47:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:06.248 09:47:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:06.248 09:47:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:06.248 09:47:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:06.248 09:47:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:10:06.248 09:47:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:10:06.248 09:47:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:06.248 09:47:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:06.248 09:47:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:10:06.248 09:47:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:06.248 09:47:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 64817 00:10:10.441 00:10:10.441 job0: (groupid=0, jobs=1): err= 0: pid=64838: Fri Dec 6 09:47:35 2024 00:10:10.441 read: IOPS=9295, BW=36.3MiB/s (38.1MB/s)(218MiB/6008msec) 00:10:10.441 slat (usec): min=4, max=6988, avg=64.05, stdev=251.91 00:10:10.441 clat (usec): min=2050, max=15919, avg=9414.24, stdev=1597.55 00:10:10.441 lat (usec): min=2060, max=15930, avg=9478.29, stdev=1602.22 00:10:10.441 clat percentiles (usec): 00:10:10.441 | 1.00th=[ 5014], 5.00th=[ 7308], 10.00th=[ 8094], 20.00th=[ 8586], 00:10:10.441 | 30.00th=[ 8848], 40.00th=[ 9110], 50.00th=[ 9241], 60.00th=[ 9503], 00:10:10.441 | 70.00th=[ 9765], 80.00th=[10159], 90.00th=[10814], 95.00th=[13173], 00:10:10.441 | 99.00th=[14746], 99.50th=[15008], 99.90th=[15533], 99.95th=[15664], 00:10:10.441 | 99.99th=[15795] 00:10:10.441 bw ( KiB/s): min= 4952, max=25048, per=50.71%, avg=18857.09, stdev=6570.69, samples=11 00:10:10.441 iops : min= 1238, max= 6262, avg=4714.27, stdev=1642.67, samples=11 00:10:10.441 write: IOPS=5571, BW=21.8MiB/s (22.8MB/s)(113MiB/5180msec); 0 zone resets 00:10:10.441 slat (usec): min=17, max=3223, avg=72.86, stdev=180.81 00:10:10.441 clat (usec): min=2933, max=15650, avg=8175.71, stdev=1396.50 00:10:10.441 lat (usec): min=2964, max=15686, avg=8248.57, stdev=1400.21 00:10:10.441 clat percentiles (usec): 00:10:10.441 | 1.00th=[ 3884], 5.00th=[ 5014], 10.00th=[ 6783], 20.00th=[ 7504], 00:10:10.441 | 30.00th=[ 7898], 40.00th=[ 8094], 50.00th=[ 8356], 60.00th=[ 8586], 00:10:10.441 | 70.00th=[ 8717], 80.00th=[ 8979], 90.00th=[ 9372], 95.00th=[ 9765], 00:10:10.441 | 99.00th=[12387], 99.50th=[13042], 99.90th=[15008], 99.95th=[15139], 00:10:10.441 | 99.99th=[15533] 00:10:10.441 bw ( KiB/s): min= 5256, max=24576, per=84.72%, avg=18880.36, stdev=6270.79, samples=11 00:10:10.441 iops : min= 1314, max= 6144, avg=4720.09, stdev=1567.70, samples=11 00:10:10.441 lat (msec) : 4=0.65%, 10=83.35%, 20=16.00% 00:10:10.441 cpu : usr=5.78%, sys=20.49%, ctx=5176, majf=0, minf=102 00:10:10.441 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:10:10.441 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.441 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:10.441 issued rwts: total=55848,28859,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:10.441 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:10.441 00:10:10.441 Run status group 0 (all jobs): 00:10:10.441 READ: bw=36.3MiB/s (38.1MB/s), 36.3MiB/s-36.3MiB/s (38.1MB/s-38.1MB/s), io=218MiB (229MB), run=6008-6008msec 00:10:10.441 WRITE: bw=21.8MiB/s (22.8MB/s), 21.8MiB/s-21.8MiB/s (22.8MB/s-22.8MB/s), io=113MiB (118MB), run=5180-5180msec 00:10:10.441 00:10:10.441 Disk stats (read/write): 00:10:10.441 nvme0n1: ios=55049/28302, merge=0/0, ticks=496555/216755, in_queue=713310, util=98.70% 00:10:10.441 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:10:10.441 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:10:10.701 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:10:10.701 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:10:10.701 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:10.701 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:10.701 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:10.701 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:10.701 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:10:10.701 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:10:10.701 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:10.701 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:10.701 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:10.701 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:10.701 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:10:10.701 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=64920 00:10:10.701 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:10:10.701 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:10:10.962 [global] 00:10:10.962 thread=1 00:10:10.962 invalidate=1 00:10:10.962 rw=randrw 00:10:10.962 time_based=1 00:10:10.962 runtime=6 00:10:10.962 ioengine=libaio 00:10:10.962 direct=1 00:10:10.962 bs=4096 00:10:10.962 iodepth=128 00:10:10.962 norandommap=0 00:10:10.962 numjobs=1 00:10:10.962 00:10:10.962 verify_dump=1 00:10:10.962 verify_backlog=512 00:10:10.962 verify_state_save=0 00:10:10.962 do_verify=1 00:10:10.962 verify=crc32c-intel 00:10:10.962 [job0] 00:10:10.962 filename=/dev/nvme0n1 00:10:10.962 Could not set queue depth (nvme0n1) 00:10:10.962 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:10.962 fio-3.35 00:10:10.962 Starting 1 thread 00:10:11.901 09:47:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:10:12.162 09:47:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:10:12.421 
09:47:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:10:12.421 09:47:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:10:12.421 09:47:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:12.421 09:47:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:12.421 09:47:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:12.421 09:47:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:12.421 09:47:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:10:12.421 09:47:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:10:12.421 09:47:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:12.421 09:47:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:12.421 09:47:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:12.421 09:47:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:12.421 09:47:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:10:12.680 09:47:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:10:12.940 09:47:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:10:12.940 09:47:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:10:12.940 09:47:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:12.940 09:47:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:12.940 09:47:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:10:12.940 09:47:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:12.940 09:47:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:10:12.940 09:47:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:10:12.940 09:47:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:12.940 09:47:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:12.940 09:47:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:12.940 09:47:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:12.940 09:47:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 64920 00:10:17.133 00:10:17.133 job0: (groupid=0, jobs=1): err= 0: pid=64941: Fri Dec 6 09:47:42 2024 00:10:17.133 read: IOPS=9885, BW=38.6MiB/s (40.5MB/s)(232MiB/6007msec) 00:10:17.133 slat (usec): min=7, max=6791, avg=49.63, stdev=219.09 00:10:17.133 clat (usec): min=421, max=19413, avg=8832.20, stdev=2169.15 00:10:17.133 lat (usec): min=436, max=19427, avg=8881.84, stdev=2178.06 00:10:17.133 clat percentiles (usec): 00:10:17.133 | 1.00th=[ 3589], 5.00th=[ 5211], 10.00th=[ 6194], 20.00th=[ 7570], 00:10:17.133 | 30.00th=[ 8094], 40.00th=[ 8455], 50.00th=[ 8848], 60.00th=[ 9110], 00:10:17.133 | 70.00th=[ 9503], 80.00th=[ 9896], 90.00th=[11338], 95.00th=[13042], 00:10:17.133 | 99.00th=[15401], 99.50th=[16188], 99.90th=[17957], 99.95th=[18482], 00:10:17.133 | 99.99th=[19268] 00:10:17.133 bw ( KiB/s): min= 7848, max=26072, per=52.14%, avg=20618.18, stdev=5668.60, samples=11 00:10:17.133 iops : min= 1962, max= 6518, avg=5154.55, stdev=1417.15, samples=11 00:10:17.133 write: IOPS=5725, BW=22.4MiB/s (23.5MB/s)(123MiB/5499msec); 0 zone resets 00:10:17.133 slat (usec): min=17, max=2604, avg=62.13, stdev=152.69 00:10:17.133 clat (usec): min=1274, max=18166, avg=7561.25, stdev=1828.36 00:10:17.133 lat (usec): min=1349, max=18190, avg=7623.38, stdev=1839.82 00:10:17.133 clat percentiles (usec): 00:10:17.133 | 1.00th=[ 3163], 5.00th=[ 4047], 10.00th=[ 4686], 20.00th=[ 6128], 00:10:17.133 | 30.00th=[ 7111], 40.00th=[ 7570], 50.00th=[ 7898], 60.00th=[ 8160], 00:10:17.133 | 70.00th=[ 8455], 80.00th=[ 8848], 90.00th=[ 9372], 95.00th=[10028], 00:10:17.133 | 99.00th=[12125], 99.50th=[13173], 99.90th=[15139], 99.95th=[15664], 00:10:17.133 | 99.99th=[16909] 00:10:17.133 bw ( KiB/s): min= 8368, max=25776, per=90.13%, avg=20640.00, stdev=5420.24, samples=11 00:10:17.133 iops : min= 2092, max= 6444, avg=5160.00, stdev=1355.06, samples=11 00:10:17.133 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.02% 00:10:17.133 lat (msec) : 2=0.19%, 4=2.33%, 10=83.31%, 20=14.14% 00:10:17.133 cpu : usr=6.06%, sys=22.73%, ctx=5090, majf=0, minf=54 00:10:17.133 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:10:17.133 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:17.133 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:17.133 issued rwts: total=59380,31483,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:17.133 latency : target=0, 
window=0, percentile=100.00%, depth=128 00:10:17.133 00:10:17.133 Run status group 0 (all jobs): 00:10:17.133 READ: bw=38.6MiB/s (40.5MB/s), 38.6MiB/s-38.6MiB/s (40.5MB/s-40.5MB/s), io=232MiB (243MB), run=6007-6007msec 00:10:17.133 WRITE: bw=22.4MiB/s (23.5MB/s), 22.4MiB/s-22.4MiB/s (23.5MB/s-23.5MB/s), io=123MiB (129MB), run=5499-5499msec 00:10:17.133 00:10:17.133 Disk stats (read/write): 00:10:17.133 nvme0n1: ios=58772/30645, merge=0/0, ticks=497563/216909, in_queue=714472, util=98.66% 00:10:17.133 09:47:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:17.133 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:17.133 09:47:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:17.133 09:47:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1223 -- # local i=0 00:10:17.133 09:47:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:17.133 09:47:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:17.133 09:47:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:17.133 09:47:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:17.133 09:47:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1235 -- # return 0 00:10:17.133 09:47:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:17.703 09:47:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:10:17.703 09:47:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:10:17.703 09:47:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:10:17.703 09:47:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:10:17.703 09:47:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:17.703 09:47:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:10:17.703 09:47:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:17.703 09:47:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:10:17.703 09:47:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:17.703 09:47:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:17.703 rmmod nvme_tcp 00:10:17.703 rmmod nvme_fabrics 00:10:17.703 rmmod nvme_keyring 00:10:17.703 09:47:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:17.703 09:47:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:10:17.703 09:47:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:10:17.703 09:47:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n 
64722 ']' 00:10:17.703 09:47:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # killprocess 64722 00:10:17.703 09:47:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@954 -- # '[' -z 64722 ']' 00:10:17.703 09:47:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@958 -- # kill -0 64722 00:10:17.703 09:47:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # uname 00:10:17.703 09:47:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:17.703 09:47:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64722 00:10:17.703 killing process with pid 64722 00:10:17.703 09:47:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:17.703 09:47:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:17.703 09:47:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64722' 00:10:17.703 09:47:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@973 -- # kill 64722 00:10:17.703 09:47:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@978 -- # wait 64722 00:10:17.963 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:17.963 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:17.963 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:17.963 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:10:17.963 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:10:17.963 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:17.963 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:10:17.963 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:17.963 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:17.963 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:17.963 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:17.963 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:17.963 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:17.963 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:17.963 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:17.963 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:17.963 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:17.963 09:47:43 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:18.222 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:18.222 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:18.222 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:18.222 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:18.222 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:18.222 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:18.222 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:18.222 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:18.222 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 00:10:18.222 00:10:18.222 real 0m19.981s 00:10:18.222 user 1m14.993s 00:10:18.222 sys 0m8.960s 00:10:18.222 ************************************ 00:10:18.222 END TEST nvmf_target_multipath 00:10:18.222 ************************************ 00:10:18.222 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:18.222 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:18.222 09:47:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:18.222 09:47:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:18.222 09:47:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:18.222 09:47:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:18.222 ************************************ 00:10:18.222 START TEST nvmf_zcopy 00:10:18.222 ************************************ 00:10:18.222 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:18.222 * Looking for test storage... 
00:10:18.483 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:18.483 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:18.483 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:18.483 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:10:18.483 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:18.483 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:18.483 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:18.483 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:18.483 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:10:18.483 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:10:18.483 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:10:18.483 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:10:18.483 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:10:18.483 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:10:18.483 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:10:18.483 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:18.483 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:10:18.483 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:10:18.483 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:18.483 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:18.483 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:10:18.483 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:10:18.483 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:18.483 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:10:18.483 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:10:18.483 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:10:18.483 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:10:18.483 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:18.483 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:10:18.483 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:10:18.483 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:18.483 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:18.483 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:10:18.483 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:18.483 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:18.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.483 --rc genhtml_branch_coverage=1 00:10:18.483 --rc genhtml_function_coverage=1 00:10:18.483 --rc genhtml_legend=1 00:10:18.483 --rc geninfo_all_blocks=1 00:10:18.483 --rc geninfo_unexecuted_blocks=1 00:10:18.483 00:10:18.483 ' 00:10:18.483 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:18.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.483 --rc genhtml_branch_coverage=1 00:10:18.483 --rc genhtml_function_coverage=1 00:10:18.483 --rc genhtml_legend=1 00:10:18.483 --rc geninfo_all_blocks=1 00:10:18.483 --rc geninfo_unexecuted_blocks=1 00:10:18.483 00:10:18.483 ' 00:10:18.483 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:18.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.483 --rc genhtml_branch_coverage=1 00:10:18.483 --rc genhtml_function_coverage=1 00:10:18.483 --rc genhtml_legend=1 00:10:18.483 --rc geninfo_all_blocks=1 00:10:18.483 --rc geninfo_unexecuted_blocks=1 00:10:18.483 00:10:18.483 ' 00:10:18.483 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:18.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.483 --rc genhtml_branch_coverage=1 00:10:18.483 --rc genhtml_function_coverage=1 00:10:18.483 --rc genhtml_legend=1 00:10:18.483 --rc geninfo_all_blocks=1 00:10:18.483 --rc geninfo_unexecuted_blocks=1 00:10:18.483 00:10:18.483 ' 00:10:18.483 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:18.483 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:10:18.483 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:10:18.483 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:18.483 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:18.483 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:18.483 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:18.483 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:18.483 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:18.483 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:18.483 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:18.483 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:18.483 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:10:18.483 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:10:18.483 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:18.483 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:18.483 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:18.483 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:18.483 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:18.483 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:10:18.483 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:18.483 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:18.483 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:18.484 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.484 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.484 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.484 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:10:18.484 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.484 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:10:18.484 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:18.484 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:18.484 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:18.484 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:18.484 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:18.484 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:18.484 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:18.484 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:18.484 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:18.484 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:18.484 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:10:18.484 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:18.484 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
00:10:18.484 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:18.484 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:18.484 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:18.484 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:18.484 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:18.484 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:18.484 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:18.484 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:18.484 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:18.484 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:18.484 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:18.484 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:18.484 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:18.484 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:18.484 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:18.484 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:18.484 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:18.484 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:18.484 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:18.484 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:18.484 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:18.484 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:18.484 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:18.484 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:18.484 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:18.484 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:18.484 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:18.484 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:18.484 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:18.484 Cannot find device "nvmf_init_br" 00:10:18.484 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:10:18.484 09:47:43 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:18.484 Cannot find device "nvmf_init_br2" 00:10:18.484 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:10:18.484 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:18.484 Cannot find device "nvmf_tgt_br" 00:10:18.484 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # true 00:10:18.484 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:18.484 Cannot find device "nvmf_tgt_br2" 00:10:18.484 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # true 00:10:18.484 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:18.484 Cannot find device "nvmf_init_br" 00:10:18.484 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # true 00:10:18.484 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:18.484 Cannot find device "nvmf_init_br2" 00:10:18.484 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # true 00:10:18.484 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:18.484 Cannot find device "nvmf_tgt_br" 00:10:18.484 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # true 00:10:18.484 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:18.484 Cannot find device "nvmf_tgt_br2" 00:10:18.484 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # true 00:10:18.484 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:18.484 Cannot find device "nvmf_br" 00:10:18.484 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # true 00:10:18.484 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:18.484 Cannot find device "nvmf_init_if" 00:10:18.484 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # true 00:10:18.484 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:18.484 Cannot find device "nvmf_init_if2" 00:10:18.744 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # true 00:10:18.744 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:18.744 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:18.744 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # true 00:10:18.744 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:18.744 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:18.744 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # true 00:10:18.744 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:18.744 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:18.744 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:10:18.744 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:18.744 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:18.744 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:18.744 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:18.744 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:18.744 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:18.744 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:18.744 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:18.744 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:18.744 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:18.744 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:18.744 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:18.744 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:18.744 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:18.744 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:18.744 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:18.744 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:18.744 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:18.744 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:18.744 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:18.744 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:18.744 09:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:18.744 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:19.003 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:19.003 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:19.003 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:19.003 09:47:44 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:19.003 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:19.003 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:19.003 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:19.003 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:19.003 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:10:19.003 00:10:19.003 --- 10.0.0.3 ping statistics --- 00:10:19.003 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:19.003 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:10:19.003 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:19.003 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:19.003 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.055 ms 00:10:19.003 00:10:19.003 --- 10.0.0.4 ping statistics --- 00:10:19.003 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:19.003 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:10:19.004 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:19.004 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:19.004 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:10:19.004 00:10:19.004 --- 10.0.0.1 ping statistics --- 00:10:19.004 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:19.004 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:10:19.004 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:19.004 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:19.004 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.050 ms 00:10:19.004 00:10:19.004 --- 10.0.0.2 ping statistics --- 00:10:19.004 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:19.004 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:10:19.004 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:19.004 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@461 -- # return 0 00:10:19.004 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:19.004 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:19.004 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:19.004 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:19.004 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:19.004 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:19.004 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:19.004 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:10:19.004 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:19.004 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:19.004 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:19.004 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=65248 00:10:19.004 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:19.004 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 65248 00:10:19.004 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 65248 ']' 00:10:19.004 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:19.004 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:19.004 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:19.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:19.004 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:19.004 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:19.004 [2024-12-06 09:47:44.142380] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:10:19.004 [2024-12-06 09:47:44.142524] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:19.263 [2024-12-06 09:47:44.296075] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:19.263 [2024-12-06 09:47:44.360142] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:19.263 [2024-12-06 09:47:44.360233] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:19.263 [2024-12-06 09:47:44.360258] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:19.263 [2024-12-06 09:47:44.360266] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:19.263 [2024-12-06 09:47:44.360273] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:19.263 [2024-12-06 09:47:44.360745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:19.263 [2024-12-06 09:47:44.437556] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:19.832 09:47:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:19.832 09:47:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:10:19.832 09:47:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:19.832 09:47:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:19.832 09:47:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:20.090 09:47:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:20.090 09:47:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:10:20.090 09:47:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:10:20.090 09:47:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.090 09:47:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:20.090 [2024-12-06 09:47:45.122441] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:20.090 09:47:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.090 09:47:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:20.090 09:47:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.090 09:47:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:20.090 09:47:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.090 09:47:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:20.090 09:47:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.090 09:47:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:10:20.090 [2024-12-06 09:47:45.139274] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:20.090 09:47:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.090 09:47:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:10:20.090 09:47:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.090 09:47:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:20.090 09:47:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.090 09:47:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:10:20.090 09:47:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.090 09:47:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:20.090 malloc0 00:10:20.090 09:47:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.090 09:47:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:20.090 09:47:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.090 09:47:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:20.090 09:47:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.090 09:47:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:10:20.090 09:47:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:10:20.090 09:47:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:10:20.090 09:47:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:10:20.090 09:47:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:20.090 09:47:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:20.090 { 00:10:20.090 "params": { 00:10:20.090 "name": "Nvme$subsystem", 00:10:20.090 "trtype": "$TEST_TRANSPORT", 00:10:20.090 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:20.090 "adrfam": "ipv4", 00:10:20.090 "trsvcid": "$NVMF_PORT", 00:10:20.090 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:20.090 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:20.090 "hdgst": ${hdgst:-false}, 00:10:20.090 "ddgst": ${ddgst:-false} 00:10:20.090 }, 00:10:20.090 "method": "bdev_nvme_attach_controller" 00:10:20.090 } 00:10:20.090 EOF 00:10:20.090 )") 00:10:20.090 09:47:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:10:20.090 09:47:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:10:20.090 09:47:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:10:20.090 09:47:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:20.090 "params": { 00:10:20.090 "name": "Nvme1", 00:10:20.090 "trtype": "tcp", 00:10:20.090 "traddr": "10.0.0.3", 00:10:20.090 "adrfam": "ipv4", 00:10:20.090 "trsvcid": "4420", 00:10:20.090 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:20.090 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:20.090 "hdgst": false, 00:10:20.090 "ddgst": false 00:10:20.090 }, 00:10:20.090 "method": "bdev_nvme_attach_controller" 00:10:20.090 }' 00:10:20.090 [2024-12-06 09:47:45.252645] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:10:20.090 [2024-12-06 09:47:45.252990] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65281 ] 00:10:20.349 [2024-12-06 09:47:45.408563] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:20.349 [2024-12-06 09:47:45.464679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:20.349 [2024-12-06 09:47:45.531782] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:20.607 Running I/O for 10 seconds... 00:10:22.481 5606.00 IOPS, 43.80 MiB/s [2024-12-06T09:47:48.690Z] 5608.00 IOPS, 43.81 MiB/s [2024-12-06T09:47:50.066Z] 5630.00 IOPS, 43.98 MiB/s [2024-12-06T09:47:51.001Z] 5625.25 IOPS, 43.95 MiB/s [2024-12-06T09:47:51.939Z] 5580.00 IOPS, 43.59 MiB/s [2024-12-06T09:47:52.876Z] 5564.17 IOPS, 43.47 MiB/s [2024-12-06T09:47:53.816Z] 5513.14 IOPS, 43.07 MiB/s [2024-12-06T09:47:54.755Z] 5483.50 IOPS, 42.84 MiB/s [2024-12-06T09:47:55.692Z] 5462.11 IOPS, 42.67 MiB/s [2024-12-06T09:47:55.692Z] 5449.70 IOPS, 42.58 MiB/s 00:10:30.420 Latency(us) 00:10:30.420 [2024-12-06T09:47:55.692Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:30.420 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:10:30.420 Verification LBA range: start 0x0 length 0x1000 00:10:30.420 Nvme1n1 : 10.02 5450.98 42.59 0.00 0.00 23415.55 1310.72 30384.87 00:10:30.420 [2024-12-06T09:47:55.692Z] =================================================================================================================== 00:10:30.420 [2024-12-06T09:47:55.692Z] Total : 5450.98 42.59 0.00 0.00 23415.55 1310.72 30384.87 00:10:30.678 09:47:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=65397 00:10:30.678 09:47:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:10:30.678 09:47:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:10:30.678 09:47:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:30.678 09:47:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:10:30.678 09:47:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:10:30.678 09:47:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:30.678 09:47:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:30.678 { 00:10:30.678 "params": { 00:10:30.678 "name": "Nvme$subsystem", 00:10:30.678 "trtype": "$TEST_TRANSPORT", 00:10:30.678 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:10:30.678 "adrfam": "ipv4", 00:10:30.678 "trsvcid": "$NVMF_PORT", 00:10:30.678 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:30.678 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:30.678 "hdgst": ${hdgst:-false}, 00:10:30.678 "ddgst": ${ddgst:-false} 00:10:30.678 }, 00:10:30.678 "method": "bdev_nvme_attach_controller" 00:10:30.678 } 00:10:30.678 EOF 00:10:30.678 )") 00:10:30.678 09:47:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:10:30.678 09:47:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:10:30.678 [2024-12-06 09:47:55.889367] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.678 [2024-12-06 09:47:55.889427] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.678 09:47:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:10:30.678 09:47:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:10:30.678 09:47:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:30.678 "params": { 00:10:30.678 "name": "Nvme1", 00:10:30.678 "trtype": "tcp", 00:10:30.678 "traddr": "10.0.0.3", 00:10:30.678 "adrfam": "ipv4", 00:10:30.678 "trsvcid": "4420", 00:10:30.678 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:30.678 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:30.678 "hdgst": false, 00:10:30.678 "ddgst": false 00:10:30.678 }, 00:10:30.678 "method": "bdev_nvme_attach_controller" 00:10:30.678 }' 00:10:30.678 [2024-12-06 09:47:55.901304] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.678 [2024-12-06 09:47:55.901336] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.678 [2024-12-06 09:47:55.913308] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.678 [2024-12-06 09:47:55.913338] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.678 [2024-12-06 09:47:55.925309] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.678 [2024-12-06 09:47:55.925488] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.678 [2024-12-06 09:47:55.937316] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.678 [2024-12-06 09:47:55.937363] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.678 [2024-12-06 09:47:55.945804] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:10:30.678 [2024-12-06 09:47:55.945894] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65397 ] 00:10:30.938 [2024-12-06 09:47:55.949311] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.938 [2024-12-06 09:47:55.949341] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.938 [2024-12-06 09:47:55.961316] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.938 [2024-12-06 09:47:55.961346] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.938 [2024-12-06 09:47:55.973313] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.938 [2024-12-06 09:47:55.973340] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.938 [2024-12-06 09:47:55.985324] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.938 [2024-12-06 09:47:55.985355] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.938 [2024-12-06 09:47:55.997340] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.938 [2024-12-06 09:47:55.997369] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.938 [2024-12-06 09:47:56.009345] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.938 [2024-12-06 09:47:56.009375] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.938 [2024-12-06 09:47:56.021333] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.938 [2024-12-06 09:47:56.021367] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.938 [2024-12-06 09:47:56.033335] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.938 [2024-12-06 09:47:56.033363] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.938 [2024-12-06 09:47:56.045358] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.938 [2024-12-06 09:47:56.045597] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.939 [2024-12-06 09:47:56.057347] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.939 [2024-12-06 09:47:56.057377] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.939 [2024-12-06 09:47:56.069348] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.939 [2024-12-06 09:47:56.069378] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.939 [2024-12-06 09:47:56.081350] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.939 [2024-12-06 09:47:56.081380] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.939 [2024-12-06 09:47:56.092743] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:30.939 [2024-12-06 09:47:56.093375] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.939 [2024-12-06 09:47:56.093408] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:10:30.939 [2024-12-06 09:47:56.105369] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.939 [2024-12-06 09:47:56.105398] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.939 [2024-12-06 09:47:56.117386] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.939 [2024-12-06 09:47:56.117419] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.939 [2024-12-06 09:47:56.129390] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.939 [2024-12-06 09:47:56.129425] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.939 [2024-12-06 09:47:56.141395] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.939 [2024-12-06 09:47:56.141431] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.939 [2024-12-06 09:47:56.144653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:30.939 [2024-12-06 09:47:56.153402] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.939 [2024-12-06 09:47:56.153441] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.939 [2024-12-06 09:47:56.165403] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.939 [2024-12-06 09:47:56.165445] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.939 [2024-12-06 09:47:56.177408] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.939 [2024-12-06 09:47:56.177445] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.939 [2024-12-06 09:47:56.189409] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.939 [2024-12-06 09:47:56.189444] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.939 [2024-12-06 09:47:56.201416] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.939 [2024-12-06 09:47:56.201453] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.939 [2024-12-06 09:47:56.206502] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:31.199 [2024-12-06 09:47:56.213416] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.199 [2024-12-06 09:47:56.213634] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.199 [2024-12-06 09:47:56.225428] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.199 [2024-12-06 09:47:56.225604] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.199 [2024-12-06 09:47:56.237426] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.199 [2024-12-06 09:47:56.237588] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.199 [2024-12-06 09:47:56.249429] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.199 [2024-12-06 09:47:56.249592] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.199 [2024-12-06 09:47:56.261449] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:10:31.199 [2024-12-06 09:47:56.261626] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.199 [2024-12-06 09:47:56.273457] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.199 [2024-12-06 09:47:56.273650] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.199 [2024-12-06 09:47:56.285464] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.199 [2024-12-06 09:47:56.285501] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.199 [2024-12-06 09:47:56.297478] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.199 [2024-12-06 09:47:56.297518] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.199 [2024-12-06 09:47:56.309492] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.199 [2024-12-06 09:47:56.309528] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.199 [2024-12-06 09:47:56.321502] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.199 [2024-12-06 09:47:56.321543] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.199 Running I/O for 5 seconds... 00:10:31.199 [2024-12-06 09:47:56.333502] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.199 [2024-12-06 09:47:56.333538] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.199 [2024-12-06 09:47:56.350605] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.199 [2024-12-06 09:47:56.350659] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.199 [2024-12-06 09:47:56.366023] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.199 [2024-12-06 09:47:56.366066] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.199 [2024-12-06 09:47:56.381687] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.199 [2024-12-06 09:47:56.381731] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.199 [2024-12-06 09:47:56.399659] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.199 [2024-12-06 09:47:56.399706] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.199 [2024-12-06 09:47:56.414594] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.199 [2024-12-06 09:47:56.414644] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.199 [2024-12-06 09:47:56.430146] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.199 [2024-12-06 09:47:56.430187] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.199 [2024-12-06 09:47:56.446969] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.199 [2024-12-06 09:47:56.447020] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.199 [2024-12-06 09:47:56.463675] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.199 [2024-12-06 09:47:56.463717] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
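The repeating pairs of "Requested NSID 1 already in use" and "Unable to add namespace" messages correspond to RPC-driven namespace adds being rejected because NSID 1 is already exposed by the subsystem while bdevperf I/O runs against it. A minimal sketch of an rpc.py sequence that provokes the same rejection against a running SPDK target is shown below; the relative scripts/rpc.py path, the Malloc0 bdev name, and the explicit NSID are illustrative assumptions, not values taken from this job:

    # Back a namespace with a small malloc bdev (hypothetical name/size).
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    # The first add claims NSID 1 on the subsystem used in this test.
    scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 Malloc0
    # Repeating the add with the same NSID is refused, producing the
    # "Requested NSID 1 already in use" error seen throughout this log.
    scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 Malloc0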
00:10:31.458 [2024-12-06 09:47:56.479692] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.458 [2024-12-06 09:47:56.479735] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.458 [2024-12-06 09:47:56.496264] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.458 [2024-12-06 09:47:56.496316] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.458 [2024-12-06 09:47:56.512908] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.459 [2024-12-06 09:47:56.512954] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.459 [2024-12-06 09:47:56.529966] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.459 [2024-12-06 09:47:56.530015] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.459 [2024-12-06 09:47:56.545143] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.459 [2024-12-06 09:47:56.545187] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.459 [2024-12-06 09:47:56.554713] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.459 [2024-12-06 09:47:56.554752] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.459 [2024-12-06 09:47:56.571328] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.459 [2024-12-06 09:47:56.571374] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.459 [2024-12-06 09:47:56.585113] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.459 [2024-12-06 09:47:56.585155] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.459 [2024-12-06 09:47:56.601200] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.459 [2024-12-06 09:47:56.601250] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.459 [2024-12-06 09:47:56.618563] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.459 [2024-12-06 09:47:56.618627] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.459 [2024-12-06 09:47:56.634661] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.459 [2024-12-06 09:47:56.634720] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.459 [2024-12-06 09:47:56.652944] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.459 [2024-12-06 09:47:56.652999] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.459 [2024-12-06 09:47:56.667900] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.459 [2024-12-06 09:47:56.667950] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.459 [2024-12-06 09:47:56.677942] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.459 [2024-12-06 09:47:56.677981] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.459 [2024-12-06 09:47:56.692923] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.459 
[2024-12-06 09:47:56.693197] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.459 [2024-12-06 09:47:56.703992] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.459 [2024-12-06 09:47:56.704164] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.459 [2024-12-06 09:47:56.719763] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.459 [2024-12-06 09:47:56.719815] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.729 [2024-12-06 09:47:56.734856] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.729 [2024-12-06 09:47:56.735126] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.729 [2024-12-06 09:47:56.744617] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.729 [2024-12-06 09:47:56.744654] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.729 [2024-12-06 09:47:56.760549] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.729 [2024-12-06 09:47:56.760610] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.729 [2024-12-06 09:47:56.777643] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.729 [2024-12-06 09:47:56.777694] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.729 [2024-12-06 09:47:56.793928] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.729 [2024-12-06 09:47:56.793981] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.729 [2024-12-06 09:47:56.810553] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.729 [2024-12-06 09:47:56.810621] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.729 [2024-12-06 09:47:56.827171] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.729 [2024-12-06 09:47:56.827229] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.729 [2024-12-06 09:47:56.843430] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.729 [2024-12-06 09:47:56.843481] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.729 [2024-12-06 09:47:56.861489] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.729 [2024-12-06 09:47:56.861799] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.729 [2024-12-06 09:47:56.876809] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.729 [2024-12-06 09:47:56.877044] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.729 [2024-12-06 09:47:56.893006] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.729 [2024-12-06 09:47:56.893057] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.729 [2024-12-06 09:47:56.909640] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.729 [2024-12-06 09:47:56.909697] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.729 [2024-12-06 09:47:56.926391] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.729 [2024-12-06 09:47:56.926450] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.729 [2024-12-06 09:47:56.943298] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.729 [2024-12-06 09:47:56.943358] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.729 [2024-12-06 09:47:56.959648] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.729 [2024-12-06 09:47:56.959700] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.729 [2024-12-06 09:47:56.977108] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.729 [2024-12-06 09:47:56.977171] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.000 [2024-12-06 09:47:56.993091] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.000 [2024-12-06 09:47:56.993142] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.000 [2024-12-06 09:47:57.012021] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.000 [2024-12-06 09:47:57.012075] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.000 [2024-12-06 09:47:57.027245] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.000 [2024-12-06 09:47:57.027314] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.000 [2024-12-06 09:47:57.045108] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.000 [2024-12-06 09:47:57.045170] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.000 [2024-12-06 09:47:57.059960] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.000 [2024-12-06 09:47:57.060024] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.000 [2024-12-06 09:47:57.075260] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.000 [2024-12-06 09:47:57.075310] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.000 [2024-12-06 09:47:57.085095] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.000 [2024-12-06 09:47:57.085152] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.000 [2024-12-06 09:47:57.101281] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.000 [2024-12-06 09:47:57.101348] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.000 [2024-12-06 09:47:57.116112] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.000 [2024-12-06 09:47:57.116177] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.000 [2024-12-06 09:47:57.132097] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.000 [2024-12-06 09:47:57.132160] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.000 [2024-12-06 09:47:57.148443] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.000 [2024-12-06 09:47:57.148496] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.000 [2024-12-06 09:47:57.165220] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.000 [2024-12-06 09:47:57.165282] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.000 [2024-12-06 09:47:57.183497] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.000 [2024-12-06 09:47:57.183558] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.000 [2024-12-06 09:47:57.199378] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.000 [2024-12-06 09:47:57.199443] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.000 [2024-12-06 09:47:57.218474] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.000 [2024-12-06 09:47:57.218535] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.000 [2024-12-06 09:47:57.233829] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.000 [2024-12-06 09:47:57.233897] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.000 [2024-12-06 09:47:57.250825] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.000 [2024-12-06 09:47:57.250889] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.000 [2024-12-06 09:47:57.266969] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.000 [2024-12-06 09:47:57.267051] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.260 [2024-12-06 09:47:57.276667] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.260 [2024-12-06 09:47:57.276714] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.260 [2024-12-06 09:47:57.293149] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.260 [2024-12-06 09:47:57.293205] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.260 [2024-12-06 09:47:57.308845] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.260 [2024-12-06 09:47:57.308911] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.260 [2024-12-06 09:47:57.326272] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.260 [2024-12-06 09:47:57.326340] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.260 10963.00 IOPS, 85.65 MiB/s [2024-12-06T09:47:57.532Z] [2024-12-06 09:47:57.342799] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.260 [2024-12-06 09:47:57.342855] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.260 [2024-12-06 09:47:57.359523] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.260 [2024-12-06 09:47:57.359609] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.260 [2024-12-06 09:47:57.378705] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.260 [2024-12-06 09:47:57.378772] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.260 [2024-12-06 
09:47:57.393988] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.260 [2024-12-06 09:47:57.394040] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.260 [2024-12-06 09:47:57.410754] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.260 [2024-12-06 09:47:57.410802] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.260 [2024-12-06 09:47:57.428187] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.260 [2024-12-06 09:47:57.428238] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.260 [2024-12-06 09:47:57.443239] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.260 [2024-12-06 09:47:57.443287] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.260 [2024-12-06 09:47:57.459377] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.260 [2024-12-06 09:47:57.459424] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.260 [2024-12-06 09:47:57.476397] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.260 [2024-12-06 09:47:57.476445] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.260 [2024-12-06 09:47:57.493971] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.260 [2024-12-06 09:47:57.494015] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.260 [2024-12-06 09:47:57.509533] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.260 [2024-12-06 09:47:57.509607] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.260 [2024-12-06 09:47:57.526154] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.260 [2024-12-06 09:47:57.526208] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.520 [2024-12-06 09:47:57.541959] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.520 [2024-12-06 09:47:57.542015] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.520 [2024-12-06 09:47:57.551804] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.520 [2024-12-06 09:47:57.551871] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.520 [2024-12-06 09:47:57.566866] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.520 [2024-12-06 09:47:57.566919] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.520 [2024-12-06 09:47:57.577978] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.520 [2024-12-06 09:47:57.578044] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.520 [2024-12-06 09:47:57.593088] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.520 [2024-12-06 09:47:57.593138] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.520 [2024-12-06 09:47:57.609293] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.520 [2024-12-06 09:47:57.609343] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.520 [2024-12-06 09:47:57.619243] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.520 [2024-12-06 09:47:57.619283] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.520 [2024-12-06 09:47:57.635592] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.520 [2024-12-06 09:47:57.635644] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.520 [2024-12-06 09:47:57.651748] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.520 [2024-12-06 09:47:57.651808] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.520 [2024-12-06 09:47:57.671957] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.520 [2024-12-06 09:47:57.672009] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.520 [2024-12-06 09:47:57.686223] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.520 [2024-12-06 09:47:57.686283] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.520 [2024-12-06 09:47:57.701455] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.520 [2024-12-06 09:47:57.701514] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.520 [2024-12-06 09:47:57.717216] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.520 [2024-12-06 09:47:57.717260] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.520 [2024-12-06 09:47:57.735480] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.520 [2024-12-06 09:47:57.735537] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.520 [2024-12-06 09:47:57.750455] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.520 [2024-12-06 09:47:57.750498] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.520 [2024-12-06 09:47:57.760364] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.520 [2024-12-06 09:47:57.760402] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.520 [2024-12-06 09:47:57.775201] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.520 [2024-12-06 09:47:57.775242] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.520 [2024-12-06 09:47:57.790293] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.520 [2024-12-06 09:47:57.790338] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.779 [2024-12-06 09:47:57.806529] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.779 [2024-12-06 09:47:57.806590] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.779 [2024-12-06 09:47:57.824792] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.779 [2024-12-06 09:47:57.824849] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.779 [2024-12-06 09:47:57.839781] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.779 [2024-12-06 09:47:57.839855] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.779 [2024-12-06 09:47:57.850009] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.779 [2024-12-06 09:47:57.850049] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.779 [2024-12-06 09:47:57.865974] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.779 [2024-12-06 09:47:57.866018] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.779 [2024-12-06 09:47:57.882541] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.779 [2024-12-06 09:47:57.882596] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.779 [2024-12-06 09:47:57.900999] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.779 [2024-12-06 09:47:57.901046] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.779 [2024-12-06 09:47:57.916094] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.779 [2024-12-06 09:47:57.916151] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.779 [2024-12-06 09:47:57.925927] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.779 [2024-12-06 09:47:57.925967] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.779 [2024-12-06 09:47:57.942251] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.779 [2024-12-06 09:47:57.942306] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.779 [2024-12-06 09:47:57.957673] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.780 [2024-12-06 09:47:57.957718] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.780 [2024-12-06 09:47:57.967635] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.780 [2024-12-06 09:47:57.967679] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.780 [2024-12-06 09:47:57.983204] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.780 [2024-12-06 09:47:57.983249] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.780 [2024-12-06 09:47:57.999602] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.780 [2024-12-06 09:47:57.999642] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.780 [2024-12-06 09:47:58.016448] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.780 [2024-12-06 09:47:58.016494] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.780 [2024-12-06 09:47:58.033283] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.780 [2024-12-06 09:47:58.033342] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.780 [2024-12-06 09:47:58.048744] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.780 [2024-12-06 09:47:58.048786] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.039 [2024-12-06 09:47:58.063892] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.039 [2024-12-06 09:47:58.063937] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.039 [2024-12-06 09:47:58.073969] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.039 [2024-12-06 09:47:58.074011] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.039 [2024-12-06 09:47:58.089147] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.039 [2024-12-06 09:47:58.089191] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.039 [2024-12-06 09:47:58.100204] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.039 [2024-12-06 09:47:58.100243] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.039 [2024-12-06 09:47:58.115527] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.039 [2024-12-06 09:47:58.115598] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.039 [2024-12-06 09:47:58.130942] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.039 [2024-12-06 09:47:58.130986] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.039 [2024-12-06 09:47:58.146853] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.039 [2024-12-06 09:47:58.146895] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.040 [2024-12-06 09:47:58.163522] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.040 [2024-12-06 09:47:58.163592] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.040 [2024-12-06 09:47:58.179703] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.040 [2024-12-06 09:47:58.179760] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.040 [2024-12-06 09:47:58.198108] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.040 [2024-12-06 09:47:58.198166] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.040 [2024-12-06 09:47:58.213221] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.040 [2024-12-06 09:47:58.213267] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.040 [2024-12-06 09:47:58.223240] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.040 [2024-12-06 09:47:58.223278] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.040 [2024-12-06 09:47:58.238628] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.040 [2024-12-06 09:47:58.238668] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.040 [2024-12-06 09:47:58.255177] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.040 [2024-12-06 09:47:58.255219] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.040 [2024-12-06 09:47:58.265091] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.040 [2024-12-06 09:47:58.265130] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.040 [2024-12-06 09:47:58.281324] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.040 [2024-12-06 09:47:58.281363] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.040 [2024-12-06 09:47:58.296311] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.040 [2024-12-06 09:47:58.296352] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.300 [2024-12-06 09:47:58.311563] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.300 [2024-12-06 09:47:58.311615] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.300 [2024-12-06 09:47:58.329372] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.300 [2024-12-06 09:47:58.329413] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.300 11069.00 IOPS, 86.48 MiB/s [2024-12-06T09:47:58.572Z] [2024-12-06 09:47:58.343187] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.300 [2024-12-06 09:47:58.343223] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.300 [2024-12-06 09:47:58.358949] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.300 [2024-12-06 09:47:58.358987] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.300 [2024-12-06 09:47:58.376043] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.300 [2024-12-06 09:47:58.376085] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.300 [2024-12-06 09:47:58.392608] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.300 [2024-12-06 09:47:58.392645] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.300 [2024-12-06 09:47:58.408514] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.300 [2024-12-06 09:47:58.408553] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.300 [2024-12-06 09:47:58.418612] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.300 [2024-12-06 09:47:58.418646] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.300 [2024-12-06 09:47:58.433579] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.300 [2024-12-06 09:47:58.433612] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.300 [2024-12-06 09:47:58.448941] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.300 [2024-12-06 09:47:58.448980] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.300 [2024-12-06 09:47:58.458232] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.300 [2024-12-06 09:47:58.458265] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.300 [2024-12-06 09:47:58.473603] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:10:33.300 [2024-12-06 09:47:58.473636] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.300 [2024-12-06 09:47:58.484666] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.300 [2024-12-06 09:47:58.484696] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.300 [2024-12-06 09:47:58.499506] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.300 [2024-12-06 09:47:58.499540] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.300 [2024-12-06 09:47:58.516046] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.300 [2024-12-06 09:47:58.516078] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.300 [2024-12-06 09:47:58.534982] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.300 [2024-12-06 09:47:58.535017] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.300 [2024-12-06 09:47:58.548888] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.300 [2024-12-06 09:47:58.548922] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.300 [2024-12-06 09:47:58.565120] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.300 [2024-12-06 09:47:58.565154] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.559 [2024-12-06 09:47:58.580836] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.559 [2024-12-06 09:47:58.580869] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.559 [2024-12-06 09:47:58.590476] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.559 [2024-12-06 09:47:58.590507] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.559 [2024-12-06 09:47:58.605431] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.559 [2024-12-06 09:47:58.605462] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.559 [2024-12-06 09:47:58.615835] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.559 [2024-12-06 09:47:58.615865] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.559 [2024-12-06 09:47:58.631521] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.559 [2024-12-06 09:47:58.631556] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.559 [2024-12-06 09:47:58.648078] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.559 [2024-12-06 09:47:58.648121] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.559 [2024-12-06 09:47:58.664664] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.559 [2024-12-06 09:47:58.664696] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.559 [2024-12-06 09:47:58.681303] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.559 [2024-12-06 09:47:58.681336] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.559 [2024-12-06 09:47:58.698210] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.559 [2024-12-06 09:47:58.698242] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.560 [2024-12-06 09:47:58.714165] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.560 [2024-12-06 09:47:58.714198] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.560 [2024-12-06 09:47:58.731019] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.560 [2024-12-06 09:47:58.731051] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.560 [2024-12-06 09:47:58.747744] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.560 [2024-12-06 09:47:58.747773] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.560 [2024-12-06 09:47:58.763210] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.560 [2024-12-06 09:47:58.763241] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.560 [2024-12-06 09:47:58.773742] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.560 [2024-12-06 09:47:58.773772] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.560 [2024-12-06 09:47:58.789824] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.560 [2024-12-06 09:47:58.789855] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.560 [2024-12-06 09:47:58.804194] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.560 [2024-12-06 09:47:58.804224] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.560 [2024-12-06 09:47:58.819883] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.560 [2024-12-06 09:47:58.819914] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.819 [2024-12-06 09:47:58.838501] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.819 [2024-12-06 09:47:58.838549] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.819 [2024-12-06 09:47:58.854489] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.819 [2024-12-06 09:47:58.854522] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.819 [2024-12-06 09:47:58.872892] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.819 [2024-12-06 09:47:58.872924] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.819 [2024-12-06 09:47:58.888310] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.819 [2024-12-06 09:47:58.888340] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.819 [2024-12-06 09:47:58.897990] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.819 [2024-12-06 09:47:58.898021] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.819 [2024-12-06 09:47:58.915189] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.819 [2024-12-06 09:47:58.915222] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.819 [2024-12-06 09:47:58.930954] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.819 [2024-12-06 09:47:58.930983] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.819 [2024-12-06 09:47:58.949383] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.819 [2024-12-06 09:47:58.949415] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.819 [2024-12-06 09:47:58.964607] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.819 [2024-12-06 09:47:58.964639] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.819 [2024-12-06 09:47:58.984405] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.819 [2024-12-06 09:47:58.984438] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.819 [2024-12-06 09:47:58.999378] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.819 [2024-12-06 09:47:58.999410] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.819 [2024-12-06 09:47:59.010199] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.819 [2024-12-06 09:47:59.010230] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.819 [2024-12-06 09:47:59.026221] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.819 [2024-12-06 09:47:59.026251] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.819 [2024-12-06 09:47:59.041531] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.819 [2024-12-06 09:47:59.041581] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.819 [2024-12-06 09:47:59.058866] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.819 [2024-12-06 09:47:59.058903] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.819 [2024-12-06 09:47:59.075347] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.819 [2024-12-06 09:47:59.075382] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.078 [2024-12-06 09:47:59.091676] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.078 [2024-12-06 09:47:59.091712] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.078 [2024-12-06 09:47:59.108623] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.078 [2024-12-06 09:47:59.108660] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.078 [2024-12-06 09:47:59.124743] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.078 [2024-12-06 09:47:59.124779] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.078 [2024-12-06 09:47:59.134802] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.078 [2024-12-06 09:47:59.134839] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.078 [2024-12-06 09:47:59.150528] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.078 [2024-12-06 09:47:59.150806] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.078 [2024-12-06 09:47:59.166650] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.078 [2024-12-06 09:47:59.166688] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.078 [2024-12-06 09:47:59.182146] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.078 [2024-12-06 09:47:59.182338] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.078 [2024-12-06 09:47:59.192507] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.079 [2024-12-06 09:47:59.192545] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.079 [2024-12-06 09:47:59.207783] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.079 [2024-12-06 09:47:59.207818] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.079 [2024-12-06 09:47:59.224273] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.079 [2024-12-06 09:47:59.224312] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.079 [2024-12-06 09:47:59.242758] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.079 [2024-12-06 09:47:59.242804] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.079 [2024-12-06 09:47:59.258039] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.079 [2024-12-06 09:47:59.258084] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.079 [2024-12-06 09:47:59.268222] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.079 [2024-12-06 09:47:59.268272] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.079 [2024-12-06 09:47:59.285340] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.079 [2024-12-06 09:47:59.285375] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.079 [2024-12-06 09:47:59.299653] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.079 [2024-12-06 09:47:59.299690] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.079 [2024-12-06 09:47:59.315021] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.079 [2024-12-06 09:47:59.315282] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.079 [2024-12-06 09:47:59.331411] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.079 [2024-12-06 09:47:59.331562] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.079 11023.33 IOPS, 86.12 MiB/s [2024-12-06T09:47:59.351Z] [2024-12-06 09:47:59.348280] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.079 [2024-12-06 09:47:59.348456] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.338 [2024-12-06 09:47:59.365557] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:10:34.338 [2024-12-06 09:47:59.365758] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.338 [2024-12-06 09:47:59.382597] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.338 [2024-12-06 09:47:59.382811] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.338 [2024-12-06 09:47:59.398262] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.338 [2024-12-06 09:47:59.398440] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.338 [2024-12-06 09:47:59.408741] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.338 [2024-12-06 09:47:59.408915] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.338 [2024-12-06 09:47:59.423636] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.338 [2024-12-06 09:47:59.423821] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.338 [2024-12-06 09:47:59.440675] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.338 [2024-12-06 09:47:59.440883] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.338 [2024-12-06 09:47:59.456787] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.338 [2024-12-06 09:47:59.456973] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.338 [2024-12-06 09:47:59.473421] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.338 [2024-12-06 09:47:59.473649] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.338 [2024-12-06 09:47:59.489934] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.338 [2024-12-06 09:47:59.490144] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.338 [2024-12-06 09:47:59.507114] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.338 [2024-12-06 09:47:59.507321] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.338 [2024-12-06 09:47:59.522425] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.338 [2024-12-06 09:47:59.522647] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.338 [2024-12-06 09:47:59.540867] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.338 [2024-12-06 09:47:59.541102] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.338 [2024-12-06 09:47:59.556938] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.338 [2024-12-06 09:47:59.557210] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.338 [2024-12-06 09:47:59.574067] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.338 [2024-12-06 09:47:59.574313] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.338 [2024-12-06 09:47:59.592437] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.338 [2024-12-06 09:47:59.592700] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.338 [2024-12-06 09:47:59.607950] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.338 [2024-12-06 09:47:59.608004] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.598 [2024-12-06 09:47:59.617666] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.598 [2024-12-06 09:47:59.617717] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.598 [2024-12-06 09:47:59.632648] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.598 [2024-12-06 09:47:59.632692] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.598 [2024-12-06 09:47:59.648596] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.598 [2024-12-06 09:47:59.648642] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.598 [2024-12-06 09:47:59.666497] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.598 [2024-12-06 09:47:59.666561] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.598 [2024-12-06 09:47:59.681983] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.598 [2024-12-06 09:47:59.682030] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.598 [2024-12-06 09:47:59.698420] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.598 [2024-12-06 09:47:59.698466] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.598 [2024-12-06 09:47:59.714426] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.598 [2024-12-06 09:47:59.714477] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.598 [2024-12-06 09:47:59.731141] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.598 [2024-12-06 09:47:59.731208] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.598 [2024-12-06 09:47:59.747630] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.598 [2024-12-06 09:47:59.747679] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.598 [2024-12-06 09:47:59.763312] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.598 [2024-12-06 09:47:59.763369] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.598 [2024-12-06 09:47:59.773430] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.598 [2024-12-06 09:47:59.773472] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.598 [2024-12-06 09:47:59.789967] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.598 [2024-12-06 09:47:59.790007] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.598 [2024-12-06 09:47:59.805102] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.598 [2024-12-06 09:47:59.805142] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.598 [2024-12-06 09:47:59.821118] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.598 [2024-12-06 09:47:59.821158] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.598 [2024-12-06 09:47:59.836592] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.598 [2024-12-06 09:47:59.836628] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.598 [2024-12-06 09:47:59.853026] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.598 [2024-12-06 09:47:59.853079] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.857 [2024-12-06 09:47:59.869314] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.857 [2024-12-06 09:47:59.869354] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.857 [2024-12-06 09:47:59.885897] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.857 [2024-12-06 09:47:59.885953] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.857 [2024-12-06 09:47:59.902925] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.857 [2024-12-06 09:47:59.902965] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.857 [2024-12-06 09:47:59.917417] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.857 [2024-12-06 09:47:59.917460] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.857 [2024-12-06 09:47:59.933733] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.857 [2024-12-06 09:47:59.933776] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.857 [2024-12-06 09:47:59.949856] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.857 [2024-12-06 09:47:59.949898] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.857 [2024-12-06 09:47:59.968011] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.857 [2024-12-06 09:47:59.968053] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.857 [2024-12-06 09:47:59.983230] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.857 [2024-12-06 09:47:59.983270] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.857 [2024-12-06 09:47:59.992812] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.857 [2024-12-06 09:47:59.992847] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.857 [2024-12-06 09:48:00.008840] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.857 [2024-12-06 09:48:00.008878] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.857 [2024-12-06 09:48:00.025249] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.857 [2024-12-06 09:48:00.025291] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.857 [2024-12-06 09:48:00.042036] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.857 [2024-12-06 09:48:00.042078] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.857 [2024-12-06 09:48:00.058282] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.857 [2024-12-06 09:48:00.058324] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.857 [2024-12-06 09:48:00.074891] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.857 [2024-12-06 09:48:00.074933] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.857 [2024-12-06 09:48:00.093900] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.857 [2024-12-06 09:48:00.093948] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.857 [2024-12-06 09:48:00.108901] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.857 [2024-12-06 09:48:00.108948] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.857 [2024-12-06 09:48:00.117900] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.857 [2024-12-06 09:48:00.117946] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.116 [2024-12-06 09:48:00.135387] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.116 [2024-12-06 09:48:00.135427] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.116 [2024-12-06 09:48:00.151166] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.116 [2024-12-06 09:48:00.151209] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.116 [2024-12-06 09:48:00.160812] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.116 [2024-12-06 09:48:00.160848] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.116 [2024-12-06 09:48:00.177459] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.116 [2024-12-06 09:48:00.177505] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.116 [2024-12-06 09:48:00.192438] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.116 [2024-12-06 09:48:00.192489] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.116 [2024-12-06 09:48:00.208675] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.116 [2024-12-06 09:48:00.208715] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.116 [2024-12-06 09:48:00.224911] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.117 [2024-12-06 09:48:00.224955] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.117 [2024-12-06 09:48:00.241347] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.117 [2024-12-06 09:48:00.241410] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.117 [2024-12-06 09:48:00.257715] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.117 [2024-12-06 09:48:00.257761] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.117 [2024-12-06 09:48:00.276619] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.117 [2024-12-06 09:48:00.276668] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.117 [2024-12-06 09:48:00.290308] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.117 [2024-12-06 09:48:00.290361] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.117 [2024-12-06 09:48:00.305779] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.117 [2024-12-06 09:48:00.305826] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.117 [2024-12-06 09:48:00.315384] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.117 [2024-12-06 09:48:00.315436] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.117 [2024-12-06 09:48:00.331660] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.117 [2024-12-06 09:48:00.331711] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.117 11053.00 IOPS, 86.35 MiB/s [2024-12-06T09:48:00.389Z] [2024-12-06 09:48:00.348600] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.117 [2024-12-06 09:48:00.348640] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.117 [2024-12-06 09:48:00.364703] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.117 [2024-12-06 09:48:00.364739] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.117 [2024-12-06 09:48:00.381356] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.117 [2024-12-06 09:48:00.381404] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.376 [2024-12-06 09:48:00.399940] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.376 [2024-12-06 09:48:00.399982] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.376 [2024-12-06 09:48:00.415039] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.376 [2024-12-06 09:48:00.415081] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.376 [2024-12-06 09:48:00.425201] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.376 [2024-12-06 09:48:00.425248] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.376 [2024-12-06 09:48:00.440656] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.376 [2024-12-06 09:48:00.440691] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.376 [2024-12-06 09:48:00.457227] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.376 [2024-12-06 09:48:00.457264] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.376 [2024-12-06 09:48:00.476024] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.376 [2024-12-06 09:48:00.476068] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.376 [2024-12-06 09:48:00.490944] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.376 [2024-12-06 09:48:00.490981] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.376 [2024-12-06 
09:48:00.501107] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.376 [2024-12-06 09:48:00.501157] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.376 [2024-12-06 09:48:00.516479] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.376 [2024-12-06 09:48:00.516526] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.376 [2024-12-06 09:48:00.532979] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.376 [2024-12-06 09:48:00.533016] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.376 [2024-12-06 09:48:00.542432] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.376 [2024-12-06 09:48:00.542467] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.376 [2024-12-06 09:48:00.557643] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.376 [2024-12-06 09:48:00.557677] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.376 [2024-12-06 09:48:00.568129] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.376 [2024-12-06 09:48:00.568162] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.376 [2024-12-06 09:48:00.581746] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.376 [2024-12-06 09:48:00.581778] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.376 [2024-12-06 09:48:00.597256] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.376 [2024-12-06 09:48:00.597291] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.376 [2024-12-06 09:48:00.607132] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.376 [2024-12-06 09:48:00.607167] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.376 [2024-12-06 09:48:00.623780] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.376 [2024-12-06 09:48:00.623829] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.376 [2024-12-06 09:48:00.638840] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.376 [2024-12-06 09:48:00.638875] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.636 [2024-12-06 09:48:00.655153] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.636 [2024-12-06 09:48:00.655189] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.636 [2024-12-06 09:48:00.671952] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.636 [2024-12-06 09:48:00.671988] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.636 [2024-12-06 09:48:00.688467] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.636 [2024-12-06 09:48:00.688503] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.636 [2024-12-06 09:48:00.704968] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.636 [2024-12-06 09:48:00.705002] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.636 [2024-12-06 09:48:00.723767] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.636 [2024-12-06 09:48:00.723801] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.636 [2024-12-06 09:48:00.739046] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.636 [2024-12-06 09:48:00.739094] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.636 [2024-12-06 09:48:00.755993] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.636 [2024-12-06 09:48:00.756028] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.636 [2024-12-06 09:48:00.771599] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.636 [2024-12-06 09:48:00.771634] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.636 [2024-12-06 09:48:00.782383] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.636 [2024-12-06 09:48:00.782416] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.636 [2024-12-06 09:48:00.797451] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.636 [2024-12-06 09:48:00.797482] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.636 [2024-12-06 09:48:00.813835] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.636 [2024-12-06 09:48:00.813874] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.636 [2024-12-06 09:48:00.830556] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.636 [2024-12-06 09:48:00.830613] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.636 [2024-12-06 09:48:00.848503] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.636 [2024-12-06 09:48:00.848535] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.636 [2024-12-06 09:48:00.863202] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.636 [2024-12-06 09:48:00.863267] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.636 [2024-12-06 09:48:00.878805] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.636 [2024-12-06 09:48:00.878841] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.636 [2024-12-06 09:48:00.889903] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.636 [2024-12-06 09:48:00.889938] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.636 [2024-12-06 09:48:00.905801] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.636 [2024-12-06 09:48:00.905834] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.895 [2024-12-06 09:48:00.922039] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.895 [2024-12-06 09:48:00.922070] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.895 [2024-12-06 09:48:00.937814] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.895 [2024-12-06 09:48:00.937848] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.895 [2024-12-06 09:48:00.949289] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.895 [2024-12-06 09:48:00.949321] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.895 [2024-12-06 09:48:00.965848] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.895 [2024-12-06 09:48:00.965882] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.895 [2024-12-06 09:48:00.981051] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.895 [2024-12-06 09:48:00.981086] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.895 [2024-12-06 09:48:00.991567] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.895 [2024-12-06 09:48:00.991611] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.895 [2024-12-06 09:48:01.006689] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.895 [2024-12-06 09:48:01.006752] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.895 [2024-12-06 09:48:01.023596] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.895 [2024-12-06 09:48:01.023644] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.895 [2024-12-06 09:48:01.039952] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.895 [2024-12-06 09:48:01.039986] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.895 [2024-12-06 09:48:01.055462] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.895 [2024-12-06 09:48:01.055510] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.895 [2024-12-06 09:48:01.071876] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.895 [2024-12-06 09:48:01.071911] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.895 [2024-12-06 09:48:01.089121] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.895 [2024-12-06 09:48:01.089157] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.895 [2024-12-06 09:48:01.105267] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.895 [2024-12-06 09:48:01.105299] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.895 [2024-12-06 09:48:01.114335] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.895 [2024-12-06 09:48:01.114364] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.895 [2024-12-06 09:48:01.131430] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.895 [2024-12-06 09:48:01.131493] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.895 [2024-12-06 09:48:01.146770] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.895 [2024-12-06 09:48:01.146805] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.895 [2024-12-06 09:48:01.162831] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.895 [2024-12-06 09:48:01.162864] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.155 [2024-12-06 09:48:01.178105] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.155 [2024-12-06 09:48:01.178135] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.155 [2024-12-06 09:48:01.194266] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.155 [2024-12-06 09:48:01.194308] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.155 [2024-12-06 09:48:01.209333] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.155 [2024-12-06 09:48:01.209366] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.155 [2024-12-06 09:48:01.224546] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.155 [2024-12-06 09:48:01.224605] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.155 [2024-12-06 09:48:01.241734] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.155 [2024-12-06 09:48:01.241809] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.155 [2024-12-06 09:48:01.254039] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.155 [2024-12-06 09:48:01.254070] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.155 [2024-12-06 09:48:01.271824] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.155 [2024-12-06 09:48:01.271867] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.155 [2024-12-06 09:48:01.287890] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.155 [2024-12-06 09:48:01.287945] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.155 [2024-12-06 09:48:01.298305] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.155 [2024-12-06 09:48:01.298335] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.155 [2024-12-06 09:48:01.311735] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.155 [2024-12-06 09:48:01.311766] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.155 [2024-12-06 09:48:01.326231] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.155 [2024-12-06 09:48:01.326261] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.155 11001.20 IOPS, 85.95 MiB/s 00:10:36.155 Latency(us) 00:10:36.155 [2024-12-06T09:48:01.428Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:36.156 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:10:36.156 Nvme1n1 : 5.00 11017.62 86.08 0.00 0.00 11609.99 4468.36 20137.43 00:10:36.156 [2024-12-06T09:48:01.428Z] =================================================================================================================== 00:10:36.156 [2024-12-06T09:48:01.428Z] Total : 
11017.62 86.08 0.00 0.00 11609.99 4468.36 20137.43 00:10:36.156 [2024-12-06 09:48:01.336999] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.156 [2024-12-06 09:48:01.337030] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.156 [2024-12-06 09:48:01.349011] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.156 [2024-12-06 09:48:01.349059] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.156 [2024-12-06 09:48:01.361004] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.156 [2024-12-06 09:48:01.361064] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.156 [2024-12-06 09:48:01.373008] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.156 [2024-12-06 09:48:01.373039] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.156 [2024-12-06 09:48:01.385014] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.156 [2024-12-06 09:48:01.385043] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.156 [2024-12-06 09:48:01.397018] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.156 [2024-12-06 09:48:01.397048] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.156 [2024-12-06 09:48:01.409024] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.156 [2024-12-06 09:48:01.409052] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.156 [2024-12-06 09:48:01.421027] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.156 [2024-12-06 09:48:01.421057] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.415 [2024-12-06 09:48:01.433028] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.415 [2024-12-06 09:48:01.433057] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.415 [2024-12-06 09:48:01.445031] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.415 [2024-12-06 09:48:01.445060] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.415 [2024-12-06 09:48:01.457037] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.415 [2024-12-06 09:48:01.457064] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.415 [2024-12-06 09:48:01.469051] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.415 [2024-12-06 09:48:01.469094] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.415 [2024-12-06 09:48:01.481118] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.415 [2024-12-06 09:48:01.481165] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.415 [2024-12-06 09:48:01.493108] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.415 [2024-12-06 09:48:01.493135] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.415 [2024-12-06 09:48:01.505094] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:10:36.415 [2024-12-06 09:48:01.505122] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.415 [2024-12-06 09:48:01.517131] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.415 [2024-12-06 09:48:01.517161] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.415 [2024-12-06 09:48:01.529103] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.415 [2024-12-06 09:48:01.529145] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.415 [2024-12-06 09:48:01.541101] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.415 [2024-12-06 09:48:01.541160] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.415 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (65397) - No such process 00:10:36.415 09:48:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 65397 00:10:36.415 09:48:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:36.415 09:48:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.415 09:48:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:36.415 09:48:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.415 09:48:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:36.415 09:48:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.415 09:48:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:36.415 delay0 00:10:36.415 09:48:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.415 09:48:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:10:36.415 09:48:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.415 09:48:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:36.415 09:48:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.415 09:48:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1' 00:10:36.674 [2024-12-06 09:48:01.758572] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:43.248 Initializing NVMe Controllers 00:10:43.248 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:10:43.248 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:43.248 Initialization complete. Launching workers. 
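[editor's note] The long run of paired errors above ("Requested NSID 1 already in use" from spdk_nvmf_subsystem_add_ns_ext, then "Unable to add namespace" from nvmf_rpc_ns_paused) comes from the zcopy test repeatedly re-issuing nvmf_subsystem_add_ns for NSID 1 while that NSID is still attached and I/O is in flight; every attempt is rejected and the target keeps serving the running I/O job (the interleaved "11053.00 IOPS" / "11001.20 IOPS" lines and the Nvme1n1 latency summary). The log then detaches NSID 1, wraps malloc0 in a delay bdev, re-exposes it as NSID 1, and drives it with the abort example. The sketch below condenses that sequence for readability. It is illustrative only: rpc_cmd in the SPDK test scripts resolves, roughly, to scripts/rpc.py against the running target, the NQN/bdev names and the abort invocation are the ones the log prints, and the bdev name in the first (intentionally failing) call is an assumption, since the log shows only the resulting errors.

  #!/usr/bin/env bash
  # Illustrative sketch only: approximates the RPC sequence visible in the log,
  # not the zcopy.sh script itself.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # Re-adding NSID 1 while it is still attached fails exactly as logged
  # ("Requested NSID 1 already in use"). The bdev name here is an assumption;
  # the log only shows the resulting errors.
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true

  # Detach the namespace, wrap malloc0 in a delay bdev (average and p99
  # read/write latencies of 1000000 us, i.e. one second), and expose the
  # delay bdev as NSID 1 instead.
  $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  $rpc bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1

  # Drive the deliberately slow namespace with the abort example over TCP,
  # mirroring the command the log runs next.
  /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 \
    -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1'

With delay0 in place each I/O takes on the order of a second, which gives the abort example time to cancel commands in flight; the counts reported next in the log (abort submitted 371, success 233, unsuccessful 138, failed 0) are consistent with some aborts racing I/O completion.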
00:10:43.248 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 84 00:10:43.248 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 371, failed to submit 33 00:10:43.248 success 233, unsuccessful 138, failed 0 00:10:43.248 09:48:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:10:43.248 09:48:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:10:43.248 09:48:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:43.248 09:48:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:10:43.248 09:48:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:43.248 09:48:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:10:43.248 09:48:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:43.248 09:48:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:43.248 rmmod nvme_tcp 00:10:43.248 rmmod nvme_fabrics 00:10:43.248 rmmod nvme_keyring 00:10:43.248 09:48:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:43.248 09:48:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:10:43.248 09:48:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:10:43.248 09:48:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 65248 ']' 00:10:43.248 09:48:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 65248 00:10:43.248 09:48:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 65248 ']' 00:10:43.248 09:48:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 65248 00:10:43.248 09:48:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:10:43.248 09:48:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:43.249 09:48:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65248 00:10:43.249 09:48:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:43.249 killing process with pid 65248 00:10:43.249 09:48:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:43.249 09:48:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65248' 00:10:43.249 09:48:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 65248 00:10:43.249 09:48:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 65248 00:10:43.249 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:43.249 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:43.249 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:43.249 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:10:43.249 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:43.249 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:10:43.249 09:48:08 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:10:43.249 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:43.249 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:43.249 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:43.249 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:43.249 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:43.249 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:43.249 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:43.249 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:43.249 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:43.249 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:43.249 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:43.249 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:43.249 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:43.249 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:43.249 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:43.249 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:43.249 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:43.249 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:43.249 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:43.249 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0 00:10:43.249 00:10:43.249 real 0m25.086s 00:10:43.249 user 0m39.614s 00:10:43.249 sys 0m7.730s 00:10:43.249 ************************************ 00:10:43.249 END TEST nvmf_zcopy 00:10:43.249 ************************************ 00:10:43.249 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:43.249 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:43.509 09:48:08 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:43.509 09:48:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:43.509 09:48:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:43.509 09:48:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:43.509 ************************************ 00:10:43.509 START TEST nvmf_nmic 00:10:43.509 ************************************ 00:10:43.509 09:48:08 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:43.509 * Looking for test storage... 00:10:43.509 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:43.509 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:43.509 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:10:43.509 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:43.509 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:43.509 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:43.509 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:43.509 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:43.509 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:10:43.509 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:10:43.509 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:10:43.509 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:10:43.509 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:10:43.509 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:10:43.509 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:10:43.509 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:43.509 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:10:43.509 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:10:43.509 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:43.509 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:43.509 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:10:43.509 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:10:43.509 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:43.509 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:10:43.509 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:10:43.509 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:10:43.509 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:10:43.509 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:43.509 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:10:43.509 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:10:43.509 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:43.509 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:43.509 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:10:43.509 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:43.509 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:43.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.509 --rc genhtml_branch_coverage=1 00:10:43.509 --rc genhtml_function_coverage=1 00:10:43.509 --rc genhtml_legend=1 00:10:43.509 --rc geninfo_all_blocks=1 00:10:43.509 --rc geninfo_unexecuted_blocks=1 00:10:43.509 00:10:43.509 ' 00:10:43.509 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:43.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.509 --rc genhtml_branch_coverage=1 00:10:43.509 --rc genhtml_function_coverage=1 00:10:43.509 --rc genhtml_legend=1 00:10:43.509 --rc geninfo_all_blocks=1 00:10:43.509 --rc geninfo_unexecuted_blocks=1 00:10:43.509 00:10:43.509 ' 00:10:43.509 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:43.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.509 --rc genhtml_branch_coverage=1 00:10:43.509 --rc genhtml_function_coverage=1 00:10:43.509 --rc genhtml_legend=1 00:10:43.509 --rc geninfo_all_blocks=1 00:10:43.509 --rc geninfo_unexecuted_blocks=1 00:10:43.509 00:10:43.509 ' 00:10:43.509 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:43.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.509 --rc genhtml_branch_coverage=1 00:10:43.509 --rc genhtml_function_coverage=1 00:10:43.509 --rc genhtml_legend=1 00:10:43.509 --rc geninfo_all_blocks=1 00:10:43.509 --rc geninfo_unexecuted_blocks=1 00:10:43.509 00:10:43.509 ' 00:10:43.509 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:43.509 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:43.509 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:43.509 09:48:08 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:43.509 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:43.509 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:43.509 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:43.509 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:43.509 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:43.509 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:43.509 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:43.509 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:43.509 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:10:43.509 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:10:43.509 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:43.509 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:43.509 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:43.509 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:43.509 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:43.509 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:10:43.509 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:43.509 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:43.509 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:43.509 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.509 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.509 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.509 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:43.509 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.509 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:10:43.509 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:43.509 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:43.509 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:43.509 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:43.509 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:43.509 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:43.509 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:43.509 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:43.510 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:43.510 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:43.510 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:43.510 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:43.510 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:10:43.510 09:48:08 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:43.510 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:43.510 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:43.510 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:43.510 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:43.510 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:43.510 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:43.510 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:43.769 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:43.769 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:43.769 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:43.769 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:43.769 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:43.769 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:43.769 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:43.769 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:43.769 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:43.769 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:43.769 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:43.769 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:43.769 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:43.769 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:43.769 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:43.769 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:43.769 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:43.769 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:43.769 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:43.769 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:43.769 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:43.769 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:43.769 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:43.769 Cannot 
find device "nvmf_init_br" 00:10:43.769 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:10:43.769 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:43.769 Cannot find device "nvmf_init_br2" 00:10:43.769 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:10:43.769 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:43.769 Cannot find device "nvmf_tgt_br" 00:10:43.769 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # true 00:10:43.769 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:43.769 Cannot find device "nvmf_tgt_br2" 00:10:43.769 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # true 00:10:43.769 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:43.769 Cannot find device "nvmf_init_br" 00:10:43.769 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # true 00:10:43.769 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:43.769 Cannot find device "nvmf_init_br2" 00:10:43.769 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # true 00:10:43.769 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:43.769 Cannot find device "nvmf_tgt_br" 00:10:43.769 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # true 00:10:43.769 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:43.769 Cannot find device "nvmf_tgt_br2" 00:10:43.769 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # true 00:10:43.769 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:43.769 Cannot find device "nvmf_br" 00:10:43.769 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # true 00:10:43.769 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:43.769 Cannot find device "nvmf_init_if" 00:10:43.769 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # true 00:10:43.769 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:43.769 Cannot find device "nvmf_init_if2" 00:10:43.769 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # true 00:10:43.769 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:43.769 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:43.769 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # true 00:10:43.769 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:43.769 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:43.769 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # true 00:10:43.769 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:43.769 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 
00:10:43.769 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:43.769 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:43.769 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:43.769 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:43.769 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:43.769 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:44.029 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:44.029 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:44.029 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:44.029 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:44.029 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:44.029 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:44.029 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:44.029 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:44.029 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:44.029 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:44.029 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:44.029 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:44.029 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:44.029 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:44.029 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:44.029 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:44.030 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:44.030 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:44.030 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:44.030 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:44.030 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@218 
-- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:44.030 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:44.030 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:44.030 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:44.030 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:44.030 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:44.030 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms 00:10:44.030 00:10:44.030 --- 10.0.0.3 ping statistics --- 00:10:44.030 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:44.030 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:10:44.030 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:44.030 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:44.030 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.082 ms 00:10:44.030 00:10:44.030 --- 10.0.0.4 ping statistics --- 00:10:44.030 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:44.030 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:10:44.030 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:44.030 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:44.030 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:10:44.030 00:10:44.030 --- 10.0.0.1 ping statistics --- 00:10:44.030 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:44.030 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:10:44.030 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:44.030 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:44.030 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.092 ms 00:10:44.030 00:10:44.030 --- 10.0.0.2 ping statistics --- 00:10:44.030 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:44.030 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:10:44.030 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:44.030 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@461 -- # return 0 00:10:44.030 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:44.030 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:44.030 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:44.030 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:44.030 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:44.030 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:44.030 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:44.030 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:44.030 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:44.030 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:44.030 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:44.030 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=65774 00:10:44.030 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:44.030 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 65774 00:10:44.030 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 65774 ']' 00:10:44.030 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:44.030 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:44.030 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:44.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:44.030 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:44.030 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:44.030 [2024-12-06 09:48:09.282287] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
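At this point in the trace, nvmfappstart has launched nvmf_tgt inside the namespace (ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xF) and waitforlisten is polling for the RPC socket before any configuration is applied. A simplified sketch of that launch-and-wait step, not part of the captured output, with the wait loop condensed and paths assumed relative to an SPDK checkout:

# Sketch only: start the target in the namespace and wait for its JSON-RPC socket.
ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.5
done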
00:10:44.030 [2024-12-06 09:48:09.282403] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:44.289 [2024-12-06 09:48:09.441066] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:44.289 [2024-12-06 09:48:09.504827] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:44.289 [2024-12-06 09:48:09.505089] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:44.289 [2024-12-06 09:48:09.505358] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:44.289 [2024-12-06 09:48:09.505505] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:44.289 [2024-12-06 09:48:09.505688] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:44.289 [2024-12-06 09:48:09.507050] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:44.289 [2024-12-06 09:48:09.507104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:44.289 [2024-12-06 09:48:09.507186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:44.289 [2024-12-06 09:48:09.507195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:44.549 [2024-12-06 09:48:09.567993] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:44.549 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:44.549 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:10:44.549 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:44.549 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:44.549 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:44.549 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:44.549 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:44.549 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.549 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:44.549 [2024-12-06 09:48:09.684598] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:44.549 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.549 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:44.549 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.549 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:44.549 Malloc0 00:10:44.549 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.549 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:44.549 09:48:09 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.549 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:44.549 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.549 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:44.549 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.549 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:44.550 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.550 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:44.550 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.550 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:44.550 [2024-12-06 09:48:09.758057] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:44.550 test case1: single bdev can't be used in multiple subsystems 00:10:44.550 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.550 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:44.550 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:44.550 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.550 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:44.550 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.550 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:10:44.550 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.550 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:44.550 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.550 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:44.550 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:44.550 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.550 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:44.550 [2024-12-06 09:48:09.781877] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:44.550 [2024-12-06 09:48:09.781950] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:44.550 [2024-12-06 09:48:09.781965] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.550 request: 00:10:44.550 { 00:10:44.550 
"nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:44.550 "namespace": { 00:10:44.550 "bdev_name": "Malloc0", 00:10:44.550 "no_auto_visible": false, 00:10:44.550 "hide_metadata": false 00:10:44.550 }, 00:10:44.550 "method": "nvmf_subsystem_add_ns", 00:10:44.550 "req_id": 1 00:10:44.550 } 00:10:44.550 Got JSON-RPC error response 00:10:44.550 response: 00:10:44.550 { 00:10:44.550 "code": -32602, 00:10:44.550 "message": "Invalid parameters" 00:10:44.550 } 00:10:44.550 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:44.550 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:44.550 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:44.550 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:44.550 Adding namespace failed - expected result. 00:10:44.550 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:44.550 test case2: host connect to nvmf target in multiple paths 00:10:44.550 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:10:44.550 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.550 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:44.550 [2024-12-06 09:48:09.798046] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:10:44.550 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.550 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --hostid=8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:10:44.809 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --hostid=8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 00:10:44.809 09:48:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:44.809 09:48:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:10:44.809 09:48:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:44.809 09:48:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:44.809 09:48:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:10:47.345 09:48:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:47.345 09:48:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:47.345 09:48:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:47.345 09:48:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:47.345 09:48:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 
00:10:47.345 09:48:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:10:47.345 09:48:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:47.345 [global] 00:10:47.345 thread=1 00:10:47.345 invalidate=1 00:10:47.345 rw=write 00:10:47.345 time_based=1 00:10:47.345 runtime=1 00:10:47.345 ioengine=libaio 00:10:47.345 direct=1 00:10:47.345 bs=4096 00:10:47.345 iodepth=1 00:10:47.345 norandommap=0 00:10:47.345 numjobs=1 00:10:47.345 00:10:47.345 verify_dump=1 00:10:47.345 verify_backlog=512 00:10:47.345 verify_state_save=0 00:10:47.345 do_verify=1 00:10:47.345 verify=crc32c-intel 00:10:47.345 [job0] 00:10:47.345 filename=/dev/nvme0n1 00:10:47.345 Could not set queue depth (nvme0n1) 00:10:47.345 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:47.345 fio-3.35 00:10:47.345 Starting 1 thread 00:10:48.283 00:10:48.283 job0: (groupid=0, jobs=1): err= 0: pid=65858: Fri Dec 6 09:48:13 2024 00:10:48.283 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:10:48.283 slat (nsec): min=11173, max=55803, avg=14463.03, stdev=4849.10 00:10:48.283 clat (usec): min=134, max=314, avg=210.23, stdev=31.83 00:10:48.283 lat (usec): min=146, max=338, avg=224.69, stdev=32.24 00:10:48.283 clat percentiles (usec): 00:10:48.283 | 1.00th=[ 149], 5.00th=[ 161], 10.00th=[ 169], 20.00th=[ 182], 00:10:48.283 | 30.00th=[ 192], 40.00th=[ 200], 50.00th=[ 208], 60.00th=[ 217], 00:10:48.283 | 70.00th=[ 227], 80.00th=[ 239], 90.00th=[ 253], 95.00th=[ 265], 00:10:48.283 | 99.00th=[ 289], 99.50th=[ 297], 99.90th=[ 314], 99.95th=[ 314], 00:10:48.283 | 99.99th=[ 314] 00:10:48.283 write: IOPS=2770, BW=10.8MiB/s (11.3MB/s)(10.8MiB/1001msec); 0 zone resets 00:10:48.283 slat (usec): min=13, max=118, avg=21.40, stdev= 7.48 00:10:48.283 clat (usec): min=80, max=324, avg=128.36, stdev=25.62 00:10:48.283 lat (usec): min=97, max=442, avg=149.76, stdev=27.74 00:10:48.283 clat percentiles (usec): 00:10:48.283 | 1.00th=[ 89], 5.00th=[ 94], 10.00th=[ 98], 20.00th=[ 105], 00:10:48.283 | 30.00th=[ 113], 40.00th=[ 119], 50.00th=[ 125], 60.00th=[ 133], 00:10:48.283 | 70.00th=[ 139], 80.00th=[ 149], 90.00th=[ 163], 95.00th=[ 176], 00:10:48.283 | 99.00th=[ 202], 99.50th=[ 212], 99.90th=[ 253], 99.95th=[ 269], 00:10:48.283 | 99.99th=[ 326] 00:10:48.283 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:10:48.283 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:48.283 lat (usec) : 100=6.24%, 250=87.98%, 500=5.78% 00:10:48.283 cpu : usr=2.70%, sys=7.00%, ctx=5333, majf=0, minf=5 00:10:48.283 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:48.283 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:48.283 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:48.283 issued rwts: total=2560,2773,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:48.283 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:48.283 00:10:48.284 Run status group 0 (all jobs): 00:10:48.284 READ: bw=9.99MiB/s (10.5MB/s), 9.99MiB/s-9.99MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1001-1001msec 00:10:48.284 WRITE: bw=10.8MiB/s (11.3MB/s), 10.8MiB/s-10.8MiB/s (11.3MB/s-11.3MB/s), io=10.8MiB (11.4MB), run=1001-1001msec 00:10:48.284 00:10:48.284 Disk stats (read/write): 00:10:48.284 nvme0n1: ios=2281/2560, merge=0/0, ticks=529/374, 
in_queue=903, util=91.47% 00:10:48.284 09:48:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:48.284 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:48.284 09:48:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:48.284 09:48:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:10:48.284 09:48:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:48.284 09:48:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:48.284 09:48:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:48.284 09:48:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:48.284 09:48:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:10:48.284 09:48:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:48.284 09:48:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:48.284 09:48:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:48.284 09:48:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:10:48.284 09:48:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:48.284 09:48:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:10:48.284 09:48:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:48.284 09:48:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:48.284 rmmod nvme_tcp 00:10:48.284 rmmod nvme_fabrics 00:10:48.544 rmmod nvme_keyring 00:10:48.544 09:48:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:48.544 09:48:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:10:48.544 09:48:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:10:48.544 09:48:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 65774 ']' 00:10:48.544 09:48:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 65774 00:10:48.544 09:48:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 65774 ']' 00:10:48.544 09:48:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 65774 00:10:48.544 09:48:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:10:48.544 09:48:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:48.544 09:48:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65774 00:10:48.544 killing process with pid 65774 00:10:48.544 09:48:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:48.544 09:48:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:48.544 09:48:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65774' 00:10:48.544 09:48:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # 
kill 65774 00:10:48.544 09:48:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 65774 00:10:48.804 09:48:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:48.804 09:48:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:48.804 09:48:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:48.804 09:48:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:10:48.804 09:48:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:10:48.804 09:48:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:48.804 09:48:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:10:48.804 09:48:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:48.804 09:48:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:48.804 09:48:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:48.804 09:48:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:48.804 09:48:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:48.804 09:48:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:48.804 09:48:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:48.804 09:48:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:48.804 09:48:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:48.804 09:48:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:48.804 09:48:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:48.804 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:48.804 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:48.804 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:49.063 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:49.063 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:49.063 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:49.063 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:49.063 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:49.064 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 00:10:49.064 00:10:49.064 real 0m5.588s 00:10:49.064 user 0m16.517s 00:10:49.064 sys 0m1.957s 00:10:49.064 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:49.064 ************************************ 00:10:49.064 END TEST nvmf_nmic 00:10:49.064 ************************************ 00:10:49.064 09:48:14 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:49.064 09:48:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:49.064 09:48:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:49.064 09:48:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:49.064 09:48:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:49.064 ************************************ 00:10:49.064 START TEST nvmf_fio_target 00:10:49.064 ************************************ 00:10:49.064 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:49.064 * Looking for test storage... 00:10:49.064 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:49.064 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:49.064 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:10:49.064 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:49.324 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:49.324 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:49.324 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:49.324 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:49.324 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:10:49.324 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:10:49.324 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:10:49.324 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:10:49.324 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:10:49.324 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:10:49.324 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:10:49.324 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:49.324 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:10:49.324 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:10:49.324 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:49.324 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:49.324 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:10:49.324 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:10:49.324 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:49.324 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:10:49.324 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:10:49.324 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:10:49.324 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:10:49.324 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:49.324 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:10:49.324 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:10:49.324 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:49.324 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:49.324 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:10:49.324 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:49.324 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:49.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:49.324 --rc genhtml_branch_coverage=1 00:10:49.324 --rc genhtml_function_coverage=1 00:10:49.324 --rc genhtml_legend=1 00:10:49.324 --rc geninfo_all_blocks=1 00:10:49.324 --rc geninfo_unexecuted_blocks=1 00:10:49.324 00:10:49.324 ' 00:10:49.324 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:49.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:49.324 --rc genhtml_branch_coverage=1 00:10:49.324 --rc genhtml_function_coverage=1 00:10:49.324 --rc genhtml_legend=1 00:10:49.324 --rc geninfo_all_blocks=1 00:10:49.324 --rc geninfo_unexecuted_blocks=1 00:10:49.324 00:10:49.324 ' 00:10:49.324 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:49.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:49.324 --rc genhtml_branch_coverage=1 00:10:49.324 --rc genhtml_function_coverage=1 00:10:49.324 --rc genhtml_legend=1 00:10:49.324 --rc geninfo_all_blocks=1 00:10:49.324 --rc geninfo_unexecuted_blocks=1 00:10:49.324 00:10:49.324 ' 00:10:49.324 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:49.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:49.324 --rc genhtml_branch_coverage=1 00:10:49.324 --rc genhtml_function_coverage=1 00:10:49.324 --rc genhtml_legend=1 00:10:49.324 --rc geninfo_all_blocks=1 00:10:49.324 --rc geninfo_unexecuted_blocks=1 00:10:49.324 00:10:49.324 ' 00:10:49.324 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:49.324 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:10:49.324 
09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:49.325 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:49.325 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:49.325 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:49.325 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:49.325 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:49.325 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:49.325 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:49.325 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:49.325 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:49.325 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:10:49.325 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:10:49.325 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:49.325 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:49.325 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:49.325 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:49.325 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:49.325 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:10:49.325 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:49.325 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:49.325 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:49.325 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:49.325 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:49.325 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:49.325 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:49.325 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:49.325 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:10:49.325 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:49.325 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:49.325 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:49.325 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:49.325 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:49.325 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:49.325 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:49.325 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:49.325 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:49.325 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:49.325 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:49.325 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:49.325 09:48:14 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:49.325 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:49.325 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:49.325 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:49.325 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:49.325 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:49.325 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:49.325 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:49.325 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:49.325 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:49.325 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:49.325 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:49.325 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:49.325 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:49.325 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:49.325 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:49.325 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:49.325 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:49.325 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:49.325 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:49.325 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:49.325 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:49.325 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:49.325 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:49.325 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:49.325 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:49.325 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:49.325 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:49.325 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:49.325 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:49.325 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:49.325 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:49.325 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:49.325 Cannot find device "nvmf_init_br" 00:10:49.325 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:10:49.325 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:49.325 Cannot find device "nvmf_init_br2" 00:10:49.325 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:10:49.325 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:49.325 Cannot find device "nvmf_tgt_br" 00:10:49.325 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # true 00:10:49.325 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:49.325 Cannot find device "nvmf_tgt_br2" 00:10:49.325 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # true 00:10:49.325 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:49.325 Cannot find device "nvmf_init_br" 00:10:49.325 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # true 00:10:49.325 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:49.325 Cannot find device "nvmf_init_br2" 00:10:49.325 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # true 00:10:49.325 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:49.325 Cannot find device "nvmf_tgt_br" 00:10:49.325 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # true 00:10:49.325 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:49.325 Cannot find device "nvmf_tgt_br2" 00:10:49.325 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # true 00:10:49.325 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:49.325 Cannot find device "nvmf_br" 00:10:49.325 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # true 00:10:49.325 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:49.325 Cannot find device "nvmf_init_if" 00:10:49.326 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # true 00:10:49.326 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:49.326 Cannot find device "nvmf_init_if2" 00:10:49.326 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # true 00:10:49.326 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:49.326 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:49.326 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # true 00:10:49.326 
09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:49.326 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:49.326 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # true 00:10:49.326 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:49.326 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:49.326 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:49.326 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:49.585 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:49.585 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:49.585 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:49.585 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:49.585 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:49.585 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:49.585 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:49.585 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:49.585 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:49.585 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:49.585 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:49.585 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:49.585 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:49.585 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:49.585 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:49.586 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:49.586 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:49.586 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:49.586 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:49.586 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br 00:10:49.586 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:49.586 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:49.586 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:49.586 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:49.586 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:49.586 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:49.586 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:49.586 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:49.586 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:49.586 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:49.586 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:10:49.586 00:10:49.586 --- 10.0.0.3 ping statistics --- 00:10:49.586 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:49.586 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:10:49.586 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:49.586 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:49.586 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.053 ms 00:10:49.586 00:10:49.586 --- 10.0.0.4 ping statistics --- 00:10:49.586 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:49.586 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:10:49.586 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:49.586 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:49.586 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:10:49.586 00:10:49.586 --- 10.0.0.1 ping statistics --- 00:10:49.586 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:49.586 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:10:49.586 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:49.586 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:49.586 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.088 ms 00:10:49.586 00:10:49.586 --- 10.0.0.2 ping statistics --- 00:10:49.586 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:49.586 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:10:49.586 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:49.586 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@461 -- # return 0 00:10:49.586 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:49.586 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:49.586 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:49.586 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:49.586 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:49.586 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:49.586 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:49.586 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:49.586 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:49.586 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:49.586 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:49.586 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:49.586 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=66091 00:10:49.586 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 66091 00:10:49.586 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 66091 ']' 00:10:49.586 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:49.586 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:49.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:49.586 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:49.586 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:49.586 09:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:49.845 [2024-12-06 09:48:14.882292] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
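(Editor's aside.) The trace above (nvmf/common.sh@162–225) first tears down any leftover interfaces from a previous run, then builds the test bed: a network namespace for the target, two veth pairs toward the initiator and two toward the target, a bridge joining the four peer ends, 10.0.0.1–10.0.0.4/24 addressing, iptables ACCEPT rules for NVMe/TCP port 4420, and ping checks in both directions. The following is a condensed, illustrative recap of that sequence using the same interface names and addresses that appear in the log — not a verbatim excerpt of nvmf/common.sh; the loops merely compact commands the script issues one by one:

  #!/usr/bin/env bash
  # Condensed sketch of the topology built by nvmf/common.sh in the trace above.
  set -e

  ip netns add nvmf_tgt_ns_spdk

  # veth pairs: *_if ends carry the addresses, *_br ends get bridged;
  # the target-side *_if ends are moved into the namespace.
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

  # Addressing: 10.0.0.1/2 on the initiator side, 10.0.0.3/4 inside the namespace.
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

  # Bring everything up, then bridge the four peer ends together.
  for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" up
  done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" master nvmf_br
  done

  # Allow NVMe/TCP (port 4420) in on the initiator interfaces, and forwarding across the bridge.
  iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

  # Sanity checks: initiator reaches the namespaced target addresses and vice versa.
  ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2

With the bed in place, the log continues below with nvmf_tgt started inside nvmf_tgt_ns_spdk and the fio target test (target/fio.sh) driving it over 10.0.0.3:4420.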
00:10:49.845 [2024-12-06 09:48:14.882384] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:49.845 [2024-12-06 09:48:15.037323] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:49.845 [2024-12-06 09:48:15.099187] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:49.845 [2024-12-06 09:48:15.099238] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:49.845 [2024-12-06 09:48:15.099252] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:49.845 [2024-12-06 09:48:15.099262] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:49.845 [2024-12-06 09:48:15.099272] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:49.845 [2024-12-06 09:48:15.100659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:49.846 [2024-12-06 09:48:15.100718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:49.846 [2024-12-06 09:48:15.101460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:49.846 [2024-12-06 09:48:15.101513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:50.117 [2024-12-06 09:48:15.162744] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:50.117 09:48:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:50.117 09:48:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:10:50.117 09:48:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:50.117 09:48:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:50.117 09:48:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:50.117 09:48:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:50.117 09:48:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:50.374 [2024-12-06 09:48:15.589729] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:50.374 09:48:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:50.939 09:48:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:50.939 09:48:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:51.197 09:48:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:51.197 09:48:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:51.455 09:48:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:51.455 09:48:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:51.714 09:48:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:51.714 09:48:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:51.976 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:52.234 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:52.234 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:52.495 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:52.495 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:53.064 09:48:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:53.064 09:48:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:53.064 09:48:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:53.339 09:48:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:53.339 09:48:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:53.597 09:48:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:53.597 09:48:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:53.857 09:48:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:54.115 [2024-12-06 09:48:19.264472] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:54.115 09:48:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:54.373 09:48:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:54.692 09:48:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --hostid=8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:10:54.692 09:48:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:54.692 09:48:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:10:54.692 09:48:19 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:54.692 09:48:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:10:54.692 09:48:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:10:54.692 09:48:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:10:57.238 09:48:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:57.238 09:48:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:57.238 09:48:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:57.238 09:48:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:10:57.238 09:48:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:57.238 09:48:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:10:57.238 09:48:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:57.238 [global] 00:10:57.238 thread=1 00:10:57.238 invalidate=1 00:10:57.238 rw=write 00:10:57.238 time_based=1 00:10:57.238 runtime=1 00:10:57.238 ioengine=libaio 00:10:57.238 direct=1 00:10:57.238 bs=4096 00:10:57.238 iodepth=1 00:10:57.238 norandommap=0 00:10:57.238 numjobs=1 00:10:57.238 00:10:57.238 verify_dump=1 00:10:57.238 verify_backlog=512 00:10:57.238 verify_state_save=0 00:10:57.238 do_verify=1 00:10:57.238 verify=crc32c-intel 00:10:57.238 [job0] 00:10:57.238 filename=/dev/nvme0n1 00:10:57.238 [job1] 00:10:57.238 filename=/dev/nvme0n2 00:10:57.238 [job2] 00:10:57.238 filename=/dev/nvme0n3 00:10:57.238 [job3] 00:10:57.238 filename=/dev/nvme0n4 00:10:57.238 Could not set queue depth (nvme0n1) 00:10:57.238 Could not set queue depth (nvme0n2) 00:10:57.238 Could not set queue depth (nvme0n3) 00:10:57.238 Could not set queue depth (nvme0n4) 00:10:57.238 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:57.238 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:57.238 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:57.238 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:57.238 fio-3.35 00:10:57.238 Starting 4 threads 00:10:58.177 00:10:58.177 job0: (groupid=0, jobs=1): err= 0: pid=66273: Fri Dec 6 09:48:23 2024 00:10:58.177 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:10:58.177 slat (nsec): min=11105, max=49347, avg=15681.28, stdev=4412.96 00:10:58.177 clat (usec): min=136, max=387, avg=237.21, stdev=46.49 00:10:58.177 lat (usec): min=150, max=400, avg=252.90, stdev=47.46 00:10:58.177 clat percentiles (usec): 00:10:58.177 | 1.00th=[ 153], 5.00th=[ 163], 10.00th=[ 178], 20.00th=[ 194], 00:10:58.177 | 30.00th=[ 208], 40.00th=[ 225], 50.00th=[ 237], 60.00th=[ 249], 00:10:58.177 | 70.00th=[ 262], 80.00th=[ 277], 90.00th=[ 297], 95.00th=[ 318], 00:10:58.177 | 99.00th=[ 359], 99.50th=[ 367], 99.90th=[ 383], 99.95th=[ 383], 00:10:58.177 | 99.99th=[ 388] 
00:10:58.177 write: IOPS=2471, BW=9886KiB/s (10.1MB/s)(9896KiB/1001msec); 0 zone resets 00:10:58.177 slat (usec): min=15, max=152, avg=23.38, stdev= 7.43 00:10:58.177 clat (usec): min=86, max=862, avg=167.98, stdev=48.72 00:10:58.177 lat (usec): min=103, max=878, avg=191.36, stdev=50.19 00:10:58.177 clat percentiles (usec): 00:10:58.177 | 1.00th=[ 101], 5.00th=[ 111], 10.00th=[ 119], 20.00th=[ 129], 00:10:58.177 | 30.00th=[ 139], 40.00th=[ 151], 50.00th=[ 161], 60.00th=[ 174], 00:10:58.177 | 70.00th=[ 186], 80.00th=[ 202], 90.00th=[ 227], 95.00th=[ 249], 00:10:58.177 | 99.00th=[ 302], 99.50th=[ 347], 99.90th=[ 498], 99.95th=[ 857], 00:10:58.177 | 99.99th=[ 865] 00:10:58.177 bw ( KiB/s): min= 8488, max= 8488, per=23.75%, avg=8488.00, stdev= 0.00, samples=1 00:10:58.177 iops : min= 2122, max= 2122, avg=2122.00, stdev= 0.00, samples=1 00:10:58.177 lat (usec) : 100=0.42%, 250=79.61%, 500=19.92%, 1000=0.04% 00:10:58.177 cpu : usr=1.80%, sys=7.20%, ctx=4524, majf=0, minf=13 00:10:58.177 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:58.177 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:58.177 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:58.177 issued rwts: total=2048,2474,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:58.177 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:58.177 job1: (groupid=0, jobs=1): err= 0: pid=66274: Fri Dec 6 09:48:23 2024 00:10:58.177 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:10:58.177 slat (nsec): min=13081, max=69600, avg=18616.38, stdev=5954.50 00:10:58.177 clat (usec): min=150, max=524, avg=241.77, stdev=43.75 00:10:58.177 lat (usec): min=166, max=541, avg=260.39, stdev=44.58 00:10:58.177 clat percentiles (usec): 00:10:58.177 | 1.00th=[ 163], 5.00th=[ 176], 10.00th=[ 186], 20.00th=[ 202], 00:10:58.177 | 30.00th=[ 217], 40.00th=[ 229], 50.00th=[ 241], 60.00th=[ 251], 00:10:58.177 | 70.00th=[ 265], 80.00th=[ 277], 90.00th=[ 302], 95.00th=[ 318], 00:10:58.177 | 99.00th=[ 355], 99.50th=[ 363], 99.90th=[ 392], 99.95th=[ 400], 00:10:58.177 | 99.99th=[ 529] 00:10:58.177 write: IOPS=2256, BW=9027KiB/s (9244kB/s)(9036KiB/1001msec); 0 zone resets 00:10:58.177 slat (usec): min=17, max=177, avg=27.20, stdev= 8.14 00:10:58.177 clat (usec): min=96, max=3269, avg=175.21, stdev=77.67 00:10:58.177 lat (usec): min=118, max=3306, avg=202.42, stdev=78.70 00:10:58.177 clat percentiles (usec): 00:10:58.177 | 1.00th=[ 110], 5.00th=[ 120], 10.00th=[ 127], 20.00th=[ 139], 00:10:58.177 | 30.00th=[ 149], 40.00th=[ 159], 50.00th=[ 169], 60.00th=[ 180], 00:10:58.177 | 70.00th=[ 192], 80.00th=[ 206], 90.00th=[ 227], 95.00th=[ 247], 00:10:58.177 | 99.00th=[ 281], 99.50th=[ 289], 99.90th=[ 457], 99.95th=[ 881], 00:10:58.177 | 99.99th=[ 3261] 00:10:58.177 bw ( KiB/s): min= 8192, max= 8192, per=22.92%, avg=8192.00, stdev= 0.00, samples=1 00:10:58.177 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:58.177 lat (usec) : 100=0.07%, 250=78.27%, 500=21.59%, 750=0.02%, 1000=0.02% 00:10:58.177 lat (msec) : 4=0.02% 00:10:58.177 cpu : usr=2.20%, sys=7.60%, ctx=4307, majf=0, minf=9 00:10:58.177 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:58.177 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:58.177 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:58.177 issued rwts: total=2048,2259,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:58.177 latency : target=0, window=0, 
percentile=100.00%, depth=1 00:10:58.177 job2: (groupid=0, jobs=1): err= 0: pid=66275: Fri Dec 6 09:48:23 2024 00:10:58.177 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:10:58.177 slat (nsec): min=11631, max=62647, avg=15484.76, stdev=4829.67 00:10:58.177 clat (usec): min=154, max=6470, avg=249.56, stdev=173.77 00:10:58.177 lat (usec): min=168, max=6483, avg=265.05, stdev=174.18 00:10:58.177 clat percentiles (usec): 00:10:58.177 | 1.00th=[ 165], 5.00th=[ 182], 10.00th=[ 190], 20.00th=[ 204], 00:10:58.177 | 30.00th=[ 217], 40.00th=[ 229], 50.00th=[ 239], 60.00th=[ 251], 00:10:58.177 | 70.00th=[ 265], 80.00th=[ 277], 90.00th=[ 302], 95.00th=[ 322], 00:10:58.177 | 99.00th=[ 355], 99.50th=[ 375], 99.90th=[ 2933], 99.95th=[ 3621], 00:10:58.177 | 99.99th=[ 6456] 00:10:58.177 write: IOPS=2159, BW=8639KiB/s (8847kB/s)(8648KiB/1001msec); 0 zone resets 00:10:58.177 slat (usec): min=14, max=161, avg=23.35, stdev= 7.45 00:10:58.177 clat (usec): min=101, max=2634, avg=184.11, stdev=80.63 00:10:58.177 lat (usec): min=127, max=2659, avg=207.45, stdev=81.46 00:10:58.177 clat percentiles (usec): 00:10:58.177 | 1.00th=[ 120], 5.00th=[ 129], 10.00th=[ 137], 20.00th=[ 149], 00:10:58.177 | 30.00th=[ 159], 40.00th=[ 169], 50.00th=[ 178], 60.00th=[ 188], 00:10:58.177 | 70.00th=[ 200], 80.00th=[ 215], 90.00th=[ 233], 95.00th=[ 249], 00:10:58.177 | 99.00th=[ 281], 99.50th=[ 297], 99.90th=[ 644], 99.95th=[ 2376], 00:10:58.177 | 99.99th=[ 2638] 00:10:58.177 bw ( KiB/s): min= 8192, max= 8192, per=22.92%, avg=8192.00, stdev= 0.00, samples=1 00:10:58.177 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:58.177 lat (usec) : 250=77.72%, 500=22.07%, 750=0.02%, 1000=0.07% 00:10:58.177 lat (msec) : 4=0.10%, 10=0.02% 00:10:58.177 cpu : usr=2.30%, sys=6.20%, ctx=4211, majf=0, minf=7 00:10:58.177 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:58.177 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:58.177 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:58.177 issued rwts: total=2048,2162,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:58.177 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:58.177 job3: (groupid=0, jobs=1): err= 0: pid=66276: Fri Dec 6 09:48:23 2024 00:10:58.177 read: IOPS=1627, BW=6509KiB/s (6666kB/s)(6516KiB/1001msec) 00:10:58.177 slat (nsec): min=13481, max=79584, avg=19765.62, stdev=5471.38 00:10:58.177 clat (usec): min=199, max=580, avg=284.06, stdev=40.46 00:10:58.177 lat (usec): min=217, max=603, avg=303.83, stdev=41.36 00:10:58.177 clat percentiles (usec): 00:10:58.177 | 1.00th=[ 217], 5.00th=[ 231], 10.00th=[ 239], 20.00th=[ 251], 00:10:58.177 | 30.00th=[ 260], 40.00th=[ 269], 50.00th=[ 277], 60.00th=[ 289], 00:10:58.177 | 70.00th=[ 302], 80.00th=[ 318], 90.00th=[ 338], 95.00th=[ 359], 00:10:58.177 | 99.00th=[ 400], 99.50th=[ 424], 99.90th=[ 465], 99.95th=[ 578], 00:10:58.177 | 99.99th=[ 578] 00:10:58.177 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:58.177 slat (usec): min=17, max=130, avg=28.81, stdev= 7.64 00:10:58.177 clat (usec): min=141, max=573, avg=213.93, stdev=39.54 00:10:58.177 lat (usec): min=166, max=611, avg=242.74, stdev=41.48 00:10:58.177 clat percentiles (usec): 00:10:58.177 | 1.00th=[ 155], 5.00th=[ 167], 10.00th=[ 174], 20.00th=[ 182], 00:10:58.177 | 30.00th=[ 190], 40.00th=[ 200], 50.00th=[ 208], 60.00th=[ 217], 00:10:58.177 | 70.00th=[ 227], 80.00th=[ 241], 90.00th=[ 262], 95.00th=[ 281], 00:10:58.177 | 99.00th=[ 
359], 99.50th=[ 383], 99.90th=[ 515], 99.95th=[ 519], 00:10:58.177 | 99.99th=[ 570] 00:10:58.177 bw ( KiB/s): min= 8192, max= 8192, per=22.92%, avg=8192.00, stdev= 0.00, samples=1 00:10:58.177 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:58.177 lat (usec) : 250=56.19%, 500=43.70%, 750=0.11% 00:10:58.177 cpu : usr=1.90%, sys=7.00%, ctx=3677, majf=0, minf=9 00:10:58.177 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:58.177 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:58.177 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:58.177 issued rwts: total=1629,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:58.177 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:58.177 00:10:58.177 Run status group 0 (all jobs): 00:10:58.177 READ: bw=30.3MiB/s (31.8MB/s), 6509KiB/s-8184KiB/s (6666kB/s-8380kB/s), io=30.4MiB (31.8MB), run=1001-1001msec 00:10:58.177 WRITE: bw=34.9MiB/s (36.6MB/s), 8184KiB/s-9886KiB/s (8380kB/s-10.1MB/s), io=34.9MiB (36.6MB), run=1001-1001msec 00:10:58.177 00:10:58.177 Disk stats (read/write): 00:10:58.177 nvme0n1: ios=1817/2048, merge=0/0, ticks=445/367, in_queue=812, util=87.07% 00:10:58.177 nvme0n2: ios=1658/2048, merge=0/0, ticks=426/378, in_queue=804, util=87.50% 00:10:58.177 nvme0n3: ios=1556/2048, merge=0/0, ticks=394/394, in_queue=788, util=88.11% 00:10:58.177 nvme0n4: ios=1536/1593, merge=0/0, ticks=442/363, in_queue=805, util=89.71% 00:10:58.177 09:48:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:58.177 [global] 00:10:58.177 thread=1 00:10:58.177 invalidate=1 00:10:58.177 rw=randwrite 00:10:58.177 time_based=1 00:10:58.177 runtime=1 00:10:58.177 ioengine=libaio 00:10:58.178 direct=1 00:10:58.178 bs=4096 00:10:58.178 iodepth=1 00:10:58.178 norandommap=0 00:10:58.178 numjobs=1 00:10:58.178 00:10:58.178 verify_dump=1 00:10:58.178 verify_backlog=512 00:10:58.178 verify_state_save=0 00:10:58.178 do_verify=1 00:10:58.178 verify=crc32c-intel 00:10:58.178 [job0] 00:10:58.178 filename=/dev/nvme0n1 00:10:58.178 [job1] 00:10:58.178 filename=/dev/nvme0n2 00:10:58.178 [job2] 00:10:58.178 filename=/dev/nvme0n3 00:10:58.178 [job3] 00:10:58.178 filename=/dev/nvme0n4 00:10:58.437 Could not set queue depth (nvme0n1) 00:10:58.437 Could not set queue depth (nvme0n2) 00:10:58.437 Could not set queue depth (nvme0n3) 00:10:58.437 Could not set queue depth (nvme0n4) 00:10:58.437 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:58.437 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:58.437 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:58.437 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:58.437 fio-3.35 00:10:58.437 Starting 4 threads 00:10:59.812 00:10:59.812 job0: (groupid=0, jobs=1): err= 0: pid=66329: Fri Dec 6 09:48:24 2024 00:10:59.812 read: IOPS=1694, BW=6777KiB/s (6940kB/s)(6784KiB/1001msec) 00:10:59.812 slat (nsec): min=12658, max=94063, avg=18008.69, stdev=6616.25 00:10:59.812 clat (usec): min=195, max=2478, avg=281.93, stdev=71.29 00:10:59.812 lat (usec): min=214, max=2495, avg=299.94, stdev=71.90 00:10:59.812 clat percentiles (usec): 00:10:59.812 | 1.00th=[ 212], 5.00th=[ 227], 
10.00th=[ 233], 20.00th=[ 243], 00:10:59.812 | 30.00th=[ 251], 40.00th=[ 260], 50.00th=[ 269], 60.00th=[ 281], 00:10:59.812 | 70.00th=[ 297], 80.00th=[ 314], 90.00th=[ 343], 95.00th=[ 371], 00:10:59.812 | 99.00th=[ 408], 99.50th=[ 433], 99.90th=[ 701], 99.95th=[ 2474], 00:10:59.812 | 99.99th=[ 2474] 00:10:59.812 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:59.812 slat (usec): min=17, max=218, avg=29.82, stdev=12.98 00:10:59.812 clat (usec): min=112, max=462, avg=205.89, stdev=41.92 00:10:59.812 lat (usec): min=136, max=492, avg=235.71, stdev=45.74 00:10:59.812 clat percentiles (usec): 00:10:59.812 | 1.00th=[ 135], 5.00th=[ 151], 10.00th=[ 159], 20.00th=[ 169], 00:10:59.812 | 30.00th=[ 180], 40.00th=[ 190], 50.00th=[ 202], 60.00th=[ 212], 00:10:59.812 | 70.00th=[ 225], 80.00th=[ 239], 90.00th=[ 265], 95.00th=[ 281], 00:10:59.812 | 99.00th=[ 322], 99.50th=[ 351], 99.90th=[ 408], 99.95th=[ 412], 00:10:59.812 | 99.99th=[ 461] 00:10:59.812 bw ( KiB/s): min= 8192, max= 8192, per=36.88%, avg=8192.00, stdev= 0.00, samples=1 00:10:59.812 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:59.812 lat (usec) : 250=59.88%, 500=39.98%, 750=0.11% 00:10:59.812 lat (msec) : 4=0.03% 00:10:59.812 cpu : usr=1.50%, sys=7.30%, ctx=3765, majf=0, minf=11 00:10:59.812 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:59.812 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:59.812 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:59.812 issued rwts: total=1696,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:59.812 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:59.812 job1: (groupid=0, jobs=1): err= 0: pid=66330: Fri Dec 6 09:48:24 2024 00:10:59.812 read: IOPS=866, BW=3465KiB/s (3548kB/s)(3468KiB/1001msec) 00:10:59.812 slat (usec): min=10, max=141, avg=35.29, stdev=16.65 00:10:59.812 clat (usec): min=258, max=3565, avg=556.80, stdev=192.60 00:10:59.812 lat (usec): min=287, max=3602, avg=592.09, stdev=196.38 00:10:59.812 clat percentiles (usec): 00:10:59.812 | 1.00th=[ 318], 5.00th=[ 375], 10.00th=[ 404], 20.00th=[ 441], 00:10:59.812 | 30.00th=[ 469], 40.00th=[ 498], 50.00th=[ 523], 60.00th=[ 545], 00:10:59.812 | 70.00th=[ 586], 80.00th=[ 635], 90.00th=[ 766], 95.00th=[ 840], 00:10:59.812 | 99.00th=[ 1057], 99.50th=[ 1598], 99.90th=[ 3556], 99.95th=[ 3556], 00:10:59.812 | 99.99th=[ 3556] 00:10:59.812 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:10:59.812 slat (nsec): min=18710, max=92837, avg=34994.18, stdev=12248.81 00:10:59.812 clat (usec): min=150, max=777, avg=433.12, stdev=99.17 00:10:59.812 lat (usec): min=187, max=822, avg=468.11, stdev=101.31 00:10:59.812 clat percentiles (usec): 00:10:59.812 | 1.00th=[ 231], 5.00th=[ 302], 10.00th=[ 322], 20.00th=[ 351], 00:10:59.812 | 30.00th=[ 375], 40.00th=[ 396], 50.00th=[ 412], 60.00th=[ 441], 00:10:59.812 | 70.00th=[ 478], 80.00th=[ 515], 90.00th=[ 578], 95.00th=[ 627], 00:10:59.812 | 99.00th=[ 693], 99.50th=[ 725], 99.90th=[ 766], 99.95th=[ 775], 00:10:59.812 | 99.99th=[ 775] 00:10:59.812 bw ( KiB/s): min= 4096, max= 4096, per=18.44%, avg=4096.00, stdev= 0.00, samples=1 00:10:59.812 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:59.812 lat (usec) : 250=0.90%, 500=59.28%, 750=34.90%, 1000=4.23% 00:10:59.812 lat (msec) : 2=0.63%, 4=0.05% 00:10:59.812 cpu : usr=1.80%, sys=5.30%, ctx=1896, majf=0, minf=11 00:10:59.812 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 
8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:59.812 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:59.812 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:59.812 issued rwts: total=867,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:59.812 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:59.812 job2: (groupid=0, jobs=1): err= 0: pid=66331: Fri Dec 6 09:48:24 2024 00:10:59.812 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:10:59.812 slat (nsec): min=11531, max=85117, avg=22600.23, stdev=8838.38 00:10:59.812 clat (usec): min=189, max=1012, avg=469.16, stdev=79.92 00:10:59.812 lat (usec): min=214, max=1039, avg=491.76, stdev=81.56 00:10:59.812 clat percentiles (usec): 00:10:59.812 | 1.00th=[ 265], 5.00th=[ 355], 10.00th=[ 379], 20.00th=[ 408], 00:10:59.812 | 30.00th=[ 429], 40.00th=[ 449], 50.00th=[ 469], 60.00th=[ 490], 00:10:59.812 | 70.00th=[ 506], 80.00th=[ 529], 90.00th=[ 553], 95.00th=[ 578], 00:10:59.812 | 99.00th=[ 685], 99.50th=[ 824], 99.90th=[ 1004], 99.95th=[ 1012], 00:10:59.812 | 99.99th=[ 1012] 00:10:59.812 write: IOPS=1305, BW=5223KiB/s (5348kB/s)(5228KiB/1001msec); 0 zone resets 00:10:59.812 slat (usec): min=17, max=107, avg=34.68, stdev=13.27 00:10:59.812 clat (usec): min=143, max=872, avg=339.44, stdev=82.68 00:10:59.812 lat (usec): min=178, max=908, avg=374.12, stdev=87.16 00:10:59.812 clat percentiles (usec): 00:10:59.812 | 1.00th=[ 174], 5.00th=[ 223], 10.00th=[ 247], 20.00th=[ 273], 00:10:59.812 | 30.00th=[ 293], 40.00th=[ 310], 50.00th=[ 326], 60.00th=[ 347], 00:10:59.812 | 70.00th=[ 379], 80.00th=[ 412], 90.00th=[ 453], 95.00th=[ 486], 00:10:59.812 | 99.00th=[ 545], 99.50th=[ 578], 99.90th=[ 791], 99.95th=[ 873], 00:10:59.812 | 99.99th=[ 873] 00:10:59.812 bw ( KiB/s): min= 4816, max= 4816, per=21.68%, avg=4816.00, stdev= 0.00, samples=1 00:10:59.812 iops : min= 1204, max= 1204, avg=1204.00, stdev= 0.00, samples=1 00:10:59.812 lat (usec) : 250=6.52%, 500=76.53%, 750=16.52%, 1000=0.34% 00:10:59.812 lat (msec) : 2=0.09% 00:10:59.812 cpu : usr=2.20%, sys=5.30%, ctx=2341, majf=0, minf=11 00:10:59.812 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:59.812 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:59.812 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:59.812 issued rwts: total=1024,1307,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:59.812 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:59.812 job3: (groupid=0, jobs=1): err= 0: pid=66332: Fri Dec 6 09:48:24 2024 00:10:59.812 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:10:59.812 slat (usec): min=12, max=101, avg=23.85, stdev= 8.37 00:10:59.812 clat (usec): min=289, max=3949, avg=489.71, stdev=141.37 00:10:59.812 lat (usec): min=309, max=3986, avg=513.56, stdev=142.24 00:10:59.812 clat percentiles (usec): 00:10:59.812 | 1.00th=[ 338], 5.00th=[ 367], 10.00th=[ 383], 20.00th=[ 412], 00:10:59.812 | 30.00th=[ 437], 40.00th=[ 457], 50.00th=[ 482], 60.00th=[ 502], 00:10:59.812 | 70.00th=[ 523], 80.00th=[ 545], 90.00th=[ 586], 95.00th=[ 635], 00:10:59.812 | 99.00th=[ 750], 99.50th=[ 889], 99.90th=[ 1254], 99.95th=[ 3949], 00:10:59.812 | 99.99th=[ 3949] 00:10:59.812 write: IOPS=1178, BW=4715KiB/s (4828kB/s)(4720KiB/1001msec); 0 zone resets 00:10:59.812 slat (nsec): min=19526, max=99985, avg=35750.46, stdev=10954.86 00:10:59.812 clat (usec): min=191, max=892, avg=359.66, stdev=89.40 00:10:59.812 lat (usec): min=214, max=913, 
avg=395.42, stdev=92.82 00:10:59.812 clat percentiles (usec): 00:10:59.812 | 1.00th=[ 219], 5.00th=[ 239], 10.00th=[ 260], 20.00th=[ 285], 00:10:59.812 | 30.00th=[ 302], 40.00th=[ 322], 50.00th=[ 338], 60.00th=[ 371], 00:10:59.812 | 70.00th=[ 400], 80.00th=[ 437], 90.00th=[ 478], 95.00th=[ 515], 00:10:59.812 | 99.00th=[ 594], 99.50th=[ 660], 99.90th=[ 807], 99.95th=[ 889], 00:10:59.812 | 99.99th=[ 889] 00:10:59.812 bw ( KiB/s): min= 4096, max= 4096, per=18.44%, avg=4096.00, stdev= 0.00, samples=1 00:10:59.812 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:59.812 lat (usec) : 250=4.13%, 500=72.82%, 750=22.41%, 1000=0.45% 00:10:59.812 lat (msec) : 2=0.14%, 4=0.05% 00:10:59.812 cpu : usr=1.80%, sys=5.70%, ctx=2208, majf=0, minf=11 00:10:59.812 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:59.812 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:59.812 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:59.812 issued rwts: total=1024,1180,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:59.812 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:59.812 00:10:59.812 Run status group 0 (all jobs): 00:10:59.812 READ: bw=18.0MiB/s (18.9MB/s), 3465KiB/s-6777KiB/s (3548kB/s-6940kB/s), io=18.0MiB (18.9MB), run=1001-1001msec 00:10:59.812 WRITE: bw=21.7MiB/s (22.7MB/s), 4092KiB/s-8184KiB/s (4190kB/s-8380kB/s), io=21.7MiB (22.8MB), run=1001-1001msec 00:10:59.812 00:10:59.812 Disk stats (read/write): 00:10:59.812 nvme0n1: ios=1586/1692, merge=0/0, ticks=459/372, in_queue=831, util=88.48% 00:10:59.812 nvme0n2: ios=723/1024, merge=0/0, ticks=390/419, in_queue=809, util=89.51% 00:10:59.812 nvme0n3: ios=983/1024, merge=0/0, ticks=466/337, in_queue=803, util=89.41% 00:10:59.812 nvme0n4: ios=892/1024, merge=0/0, ticks=411/381, in_queue=792, util=89.65% 00:10:59.812 09:48:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:59.812 [global] 00:10:59.812 thread=1 00:10:59.812 invalidate=1 00:10:59.812 rw=write 00:10:59.812 time_based=1 00:10:59.812 runtime=1 00:10:59.812 ioengine=libaio 00:10:59.812 direct=1 00:10:59.812 bs=4096 00:10:59.812 iodepth=128 00:10:59.812 norandommap=0 00:10:59.812 numjobs=1 00:10:59.812 00:10:59.812 verify_dump=1 00:10:59.812 verify_backlog=512 00:10:59.812 verify_state_save=0 00:10:59.812 do_verify=1 00:10:59.812 verify=crc32c-intel 00:10:59.812 [job0] 00:10:59.812 filename=/dev/nvme0n1 00:10:59.812 [job1] 00:10:59.812 filename=/dev/nvme0n2 00:10:59.812 [job2] 00:10:59.812 filename=/dev/nvme0n3 00:10:59.812 [job3] 00:10:59.812 filename=/dev/nvme0n4 00:10:59.812 Could not set queue depth (nvme0n1) 00:10:59.812 Could not set queue depth (nvme0n2) 00:10:59.812 Could not set queue depth (nvme0n3) 00:10:59.812 Could not set queue depth (nvme0n4) 00:10:59.812 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:59.812 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:59.813 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:59.813 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:59.813 fio-3.35 00:10:59.813 Starting 4 threads 00:11:01.188 00:11:01.188 job0: (groupid=0, jobs=1): err= 0: pid=66393: Fri Dec 6 09:48:26 2024 
00:11:01.188 read: IOPS=2205, BW=8821KiB/s (9032kB/s)(8856KiB/1004msec) 00:11:01.188 slat (usec): min=6, max=11076, avg=221.51, stdev=945.53 00:11:01.188 clat (usec): min=2811, max=42080, avg=28224.62, stdev=6619.28 00:11:01.188 lat (usec): min=4302, max=42092, avg=28446.12, stdev=6607.00 00:11:01.188 clat percentiles (usec): 00:11:01.188 | 1.00th=[11600], 5.00th=[16909], 10.00th=[17695], 20.00th=[19792], 00:11:01.188 | 30.00th=[27395], 40.00th=[29492], 50.00th=[30802], 60.00th=[31589], 00:11:01.188 | 70.00th=[32113], 80.00th=[32900], 90.00th=[34341], 95.00th=[36439], 00:11:01.188 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:01.188 | 99.99th=[42206] 00:11:01.188 write: IOPS=2549, BW=9.96MiB/s (10.4MB/s)(10.0MiB/1004msec); 0 zone resets 00:11:01.188 slat (usec): min=11, max=11632, avg=190.91, stdev=862.83 00:11:01.188 clat (usec): min=11917, max=36538, avg=25087.96, stdev=6508.98 00:11:01.188 lat (usec): min=12384, max=36561, avg=25278.87, stdev=6514.93 00:11:01.188 clat percentiles (usec): 00:11:01.189 | 1.00th=[14746], 5.00th=[16188], 10.00th=[16581], 20.00th=[17695], 00:11:01.189 | 30.00th=[18482], 40.00th=[22414], 50.00th=[27395], 60.00th=[28443], 00:11:01.189 | 70.00th=[29492], 80.00th=[31851], 90.00th=[32900], 95.00th=[33817], 00:11:01.189 | 99.00th=[35390], 99.50th=[35914], 99.90th=[36439], 99.95th=[36439], 00:11:01.189 | 99.99th=[36439] 00:11:01.189 bw ( KiB/s): min= 8192, max=12288, per=22.57%, avg=10240.00, stdev=2896.31, samples=2 00:11:01.189 iops : min= 2048, max= 3072, avg=2560.00, stdev=724.08, samples=2 00:11:01.189 lat (msec) : 4=0.02%, 10=0.19%, 20=28.22%, 50=71.58% 00:11:01.189 cpu : usr=3.09%, sys=7.28%, ctx=431, majf=0, minf=8 00:11:01.189 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:11:01.189 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:01.189 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:01.189 issued rwts: total=2214,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:01.189 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:01.189 job1: (groupid=0, jobs=1): err= 0: pid=66394: Fri Dec 6 09:48:26 2024 00:11:01.189 read: IOPS=2322, BW=9288KiB/s (9511kB/s)(9316KiB/1003msec) 00:11:01.189 slat (usec): min=4, max=11543, avg=214.86, stdev=948.68 00:11:01.189 clat (usec): min=2467, max=44912, avg=27125.19, stdev=7550.30 00:11:01.189 lat (usec): min=2489, max=44940, avg=27340.05, stdev=7550.17 00:11:01.189 clat percentiles (usec): 00:11:01.189 | 1.00th=[10290], 5.00th=[15795], 10.00th=[16581], 20.00th=[18220], 00:11:01.189 | 30.00th=[23462], 40.00th=[28181], 50.00th=[29754], 60.00th=[31065], 00:11:01.189 | 70.00th=[31589], 80.00th=[32637], 90.00th=[33817], 95.00th=[38011], 00:11:01.189 | 99.00th=[42730], 99.50th=[44827], 99.90th=[44827], 99.95th=[44827], 00:11:01.189 | 99.99th=[44827] 00:11:01.189 write: IOPS=2552, BW=9.97MiB/s (10.5MB/s)(10.0MiB/1003msec); 0 zone resets 00:11:01.189 slat (usec): min=12, max=9775, avg=186.39, stdev=825.06 00:11:01.189 clat (usec): min=10459, max=41272, avg=24721.56, stdev=7353.05 00:11:01.189 lat (usec): min=13597, max=41296, avg=24907.95, stdev=7362.88 00:11:01.189 clat percentiles (usec): 00:11:01.189 | 1.00th=[13566], 5.00th=[15401], 10.00th=[15795], 20.00th=[16581], 00:11:01.189 | 30.00th=[17433], 40.00th=[18744], 50.00th=[27132], 60.00th=[28443], 00:11:01.189 | 70.00th=[29754], 80.00th=[31589], 90.00th=[33424], 95.00th=[35914], 00:11:01.189 | 99.00th=[38536], 99.50th=[41157], 99.90th=[41157], 
99.95th=[41157], 00:11:01.189 | 99.99th=[41157] 00:11:01.189 bw ( KiB/s): min= 8192, max=12288, per=22.57%, avg=10240.00, stdev=2896.31, samples=2 00:11:01.189 iops : min= 2048, max= 3072, avg=2560.00, stdev=724.08, samples=2 00:11:01.189 lat (msec) : 4=0.41%, 10=0.02%, 20=32.85%, 50=66.72% 00:11:01.189 cpu : usr=2.40%, sys=8.68%, ctx=428, majf=0, minf=11 00:11:01.189 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:11:01.189 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:01.189 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:01.189 issued rwts: total=2329,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:01.189 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:01.189 job2: (groupid=0, jobs=1): err= 0: pid=66395: Fri Dec 6 09:48:26 2024 00:11:01.189 read: IOPS=2547, BW=9.95MiB/s (10.4MB/s)(10.0MiB/1005msec) 00:11:01.189 slat (usec): min=6, max=7181, avg=187.58, stdev=790.62 00:11:01.189 clat (usec): min=18767, max=31832, avg=24371.93, stdev=1984.37 00:11:01.189 lat (usec): min=18800, max=31903, avg=24559.52, stdev=2073.88 00:11:01.189 clat percentiles (usec): 00:11:01.189 | 1.00th=[19792], 5.00th=[21103], 10.00th=[22152], 20.00th=[22938], 00:11:01.189 | 30.00th=[23725], 40.00th=[23987], 50.00th=[24249], 60.00th=[24511], 00:11:01.189 | 70.00th=[24773], 80.00th=[25297], 90.00th=[27395], 95.00th=[28181], 00:11:01.189 | 99.00th=[29492], 99.50th=[31589], 99.90th=[31851], 99.95th=[31851], 00:11:01.189 | 99.99th=[31851] 00:11:01.189 write: IOPS=2679, BW=10.5MiB/s (11.0MB/s)(10.5MiB/1005msec); 0 zone resets 00:11:01.189 slat (usec): min=11, max=8142, avg=184.54, stdev=909.91 00:11:01.189 clat (usec): min=220, max=34272, avg=23756.92, stdev=3473.11 00:11:01.189 lat (usec): min=4553, max=34326, avg=23941.46, stdev=3563.06 00:11:01.189 clat percentiles (usec): 00:11:01.189 | 1.00th=[ 5407], 5.00th=[19792], 10.00th=[21627], 20.00th=[22414], 00:11:01.189 | 30.00th=[22676], 40.00th=[23462], 50.00th=[23987], 60.00th=[24511], 00:11:01.189 | 70.00th=[25035], 80.00th=[25822], 90.00th=[27132], 95.00th=[27919], 00:11:01.189 | 99.00th=[31851], 99.50th=[31851], 99.90th=[33817], 99.95th=[33817], 00:11:01.189 | 99.99th=[34341] 00:11:01.189 bw ( KiB/s): min= 9008, max=11599, per=22.71%, avg=10303.50, stdev=1832.11, samples=2 00:11:01.189 iops : min= 2252, max= 2899, avg=2575.50, stdev=457.50, samples=2 00:11:01.189 lat (usec) : 250=0.02% 00:11:01.189 lat (msec) : 10=0.80%, 20=2.97%, 50=96.21% 00:11:01.189 cpu : usr=3.29%, sys=9.16%, ctx=248, majf=0, minf=9 00:11:01.189 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:11:01.189 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:01.189 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:01.189 issued rwts: total=2560,2693,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:01.189 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:01.189 job3: (groupid=0, jobs=1): err= 0: pid=66396: Fri Dec 6 09:48:26 2024 00:11:01.189 read: IOPS=3378, BW=13.2MiB/s (13.8MB/s)(13.2MiB/1003msec) 00:11:01.189 slat (usec): min=5, max=6291, avg=138.21, stdev=566.96 00:11:01.189 clat (usec): min=721, max=24787, avg=18248.40, stdev=2261.08 00:11:01.189 lat (usec): min=3484, max=25198, avg=18386.61, stdev=2303.52 00:11:01.189 clat percentiles (usec): 00:11:01.189 | 1.00th=[ 6456], 5.00th=[15664], 10.00th=[16712], 20.00th=[17433], 00:11:01.189 | 30.00th=[17695], 40.00th=[17957], 50.00th=[18220], 
60.00th=[18744], 00:11:01.189 | 70.00th=[19006], 80.00th=[19530], 90.00th=[20579], 95.00th=[21103], 00:11:01.189 | 99.00th=[22676], 99.50th=[23200], 99.90th=[24773], 99.95th=[24773], 00:11:01.189 | 99.99th=[24773] 00:11:01.189 write: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec); 0 zone resets 00:11:01.189 slat (usec): min=12, max=6269, avg=139.00, stdev=707.40 00:11:01.189 clat (usec): min=13563, max=25831, avg=18013.78, stdev=1751.95 00:11:01.189 lat (usec): min=13596, max=25885, avg=18152.79, stdev=1875.51 00:11:01.189 clat percentiles (usec): 00:11:01.189 | 1.00th=[14615], 5.00th=[15664], 10.00th=[16057], 20.00th=[16581], 00:11:01.189 | 30.00th=[16909], 40.00th=[17171], 50.00th=[17695], 60.00th=[18220], 00:11:01.189 | 70.00th=[19006], 80.00th=[19530], 90.00th=[20317], 95.00th=[20841], 00:11:01.189 | 99.00th=[23987], 99.50th=[24773], 99.90th=[25035], 99.95th=[25560], 00:11:01.189 | 99.99th=[25822] 00:11:01.189 bw ( KiB/s): min=14328, max=14344, per=31.60%, avg=14336.00, stdev=11.31, samples=2 00:11:01.189 iops : min= 3582, max= 3586, avg=3584.00, stdev= 2.83, samples=2 00:11:01.189 lat (usec) : 750=0.01% 00:11:01.189 lat (msec) : 4=0.29%, 10=0.63%, 20=85.62%, 50=13.45% 00:11:01.189 cpu : usr=3.59%, sys=12.08%, ctx=249, majf=0, minf=11 00:11:01.189 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:11:01.189 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:01.189 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:01.189 issued rwts: total=3389,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:01.189 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:01.189 00:11:01.189 Run status group 0 (all jobs): 00:11:01.189 READ: bw=40.8MiB/s (42.8MB/s), 8821KiB/s-13.2MiB/s (9032kB/s-13.8MB/s), io=41.0MiB (43.0MB), run=1003-1005msec 00:11:01.189 WRITE: bw=44.3MiB/s (46.4MB/s), 9.96MiB/s-14.0MiB/s (10.4MB/s-14.6MB/s), io=44.5MiB (46.7MB), run=1003-1005msec 00:11:01.189 00:11:01.189 Disk stats (read/write): 00:11:01.189 nvme0n1: ios=2097/2136, merge=0/0, ticks=13510/10905, in_queue=24415, util=86.85% 00:11:01.189 nvme0n2: ios=2068/2244, merge=0/0, ticks=13279/11398, in_queue=24677, util=87.76% 00:11:01.189 nvme0n3: ios=2048/2386, merge=0/0, ticks=16214/17312, in_queue=33526, util=88.97% 00:11:01.189 nvme0n4: ios=2810/3072, merge=0/0, ticks=16731/15950, in_queue=32681, util=89.53% 00:11:01.189 09:48:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:11:01.189 [global] 00:11:01.189 thread=1 00:11:01.189 invalidate=1 00:11:01.189 rw=randwrite 00:11:01.189 time_based=1 00:11:01.189 runtime=1 00:11:01.189 ioengine=libaio 00:11:01.189 direct=1 00:11:01.189 bs=4096 00:11:01.189 iodepth=128 00:11:01.189 norandommap=0 00:11:01.189 numjobs=1 00:11:01.189 00:11:01.189 verify_dump=1 00:11:01.189 verify_backlog=512 00:11:01.189 verify_state_save=0 00:11:01.189 do_verify=1 00:11:01.189 verify=crc32c-intel 00:11:01.189 [job0] 00:11:01.189 filename=/dev/nvme0n1 00:11:01.189 [job1] 00:11:01.189 filename=/dev/nvme0n2 00:11:01.189 [job2] 00:11:01.189 filename=/dev/nvme0n3 00:11:01.189 [job3] 00:11:01.189 filename=/dev/nvme0n4 00:11:01.189 Could not set queue depth (nvme0n1) 00:11:01.189 Could not set queue depth (nvme0n2) 00:11:01.189 Could not set queue depth (nvme0n3) 00:11:01.189 Could not set queue depth (nvme0n4) 00:11:01.189 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=libaio, iodepth=128 00:11:01.189 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:01.189 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:01.190 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:01.190 fio-3.35 00:11:01.190 Starting 4 threads 00:11:02.570 00:11:02.570 job0: (groupid=0, jobs=1): err= 0: pid=66453: Fri Dec 6 09:48:27 2024 00:11:02.570 read: IOPS=2659, BW=10.4MiB/s (10.9MB/s)(10.4MiB/1005msec) 00:11:02.570 slat (usec): min=8, max=9843, avg=165.44, stdev=712.11 00:11:02.570 clat (usec): min=2580, max=56101, avg=20416.31, stdev=4030.92 00:11:02.570 lat (usec): min=6024, max=56159, avg=20581.75, stdev=4091.94 00:11:02.570 clat percentiles (usec): 00:11:02.570 | 1.00th=[11469], 5.00th=[17171], 10.00th=[17695], 20.00th=[18482], 00:11:02.570 | 30.00th=[19006], 40.00th=[19530], 50.00th=[20317], 60.00th=[20579], 00:11:02.570 | 70.00th=[20841], 80.00th=[21365], 90.00th=[22676], 95.00th=[24249], 00:11:02.570 | 99.00th=[45876], 99.50th=[46400], 99.90th=[55837], 99.95th=[55837], 00:11:02.570 | 99.99th=[56361] 00:11:02.570 write: IOPS=3056, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1005msec); 0 zone resets 00:11:02.570 slat (usec): min=6, max=13820, avg=172.87, stdev=913.07 00:11:02.570 clat (usec): min=14818, max=62547, avg=23504.21, stdev=9937.50 00:11:02.570 lat (usec): min=14852, max=62581, avg=23677.08, stdev=10000.30 00:11:02.570 clat percentiles (usec): 00:11:02.570 | 1.00th=[15664], 5.00th=[16909], 10.00th=[17171], 20.00th=[18482], 00:11:02.570 | 30.00th=[19006], 40.00th=[19530], 50.00th=[19792], 60.00th=[20317], 00:11:02.570 | 70.00th=[21103], 80.00th=[22414], 90.00th=[44303], 95.00th=[47973], 00:11:02.570 | 99.00th=[57410], 99.50th=[62129], 99.90th=[62653], 99.95th=[62653], 00:11:02.570 | 99.99th=[62653] 00:11:02.570 bw ( KiB/s): min=11792, max=12664, per=27.56%, avg=12228.00, stdev=616.60, samples=2 00:11:02.570 iops : min= 2948, max= 3166, avg=3057.00, stdev=154.15, samples=2 00:11:02.570 lat (msec) : 4=0.02%, 10=0.30%, 20=50.50%, 50=46.93%, 100=2.26% 00:11:02.570 cpu : usr=4.08%, sys=9.26%, ctx=283, majf=0, minf=1 00:11:02.570 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:11:02.570 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:02.570 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:02.570 issued rwts: total=2673,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:02.570 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:02.570 job1: (groupid=0, jobs=1): err= 0: pid=66454: Fri Dec 6 09:48:27 2024 00:11:02.570 read: IOPS=2037, BW=8151KiB/s (8347kB/s)(8192KiB/1005msec) 00:11:02.570 slat (usec): min=5, max=7551, avg=230.36, stdev=903.88 00:11:02.570 clat (usec): min=11960, max=44075, avg=29770.52, stdev=9192.00 00:11:02.570 lat (usec): min=11996, max=44090, avg=30000.88, stdev=9248.14 00:11:02.570 clat percentiles (usec): 00:11:02.570 | 1.00th=[12780], 5.00th=[14877], 10.00th=[15401], 20.00th=[15926], 00:11:02.570 | 30.00th=[29754], 40.00th=[32900], 50.00th=[33817], 60.00th=[34341], 00:11:02.570 | 70.00th=[35390], 80.00th=[37487], 90.00th=[38536], 95.00th=[39584], 00:11:02.570 | 99.00th=[42730], 99.50th=[43779], 99.90th=[44303], 99.95th=[44303], 00:11:02.570 | 99.99th=[44303] 00:11:02.570 write: IOPS=2431, BW=9727KiB/s (9961kB/s)(9776KiB/1005msec); 0 zone resets 
00:11:02.570 slat (usec): min=11, max=9010, avg=206.98, stdev=1058.35 00:11:02.570 clat (usec): min=4380, max=44110, avg=26580.99, stdev=8680.67 00:11:02.570 lat (usec): min=4404, max=44140, avg=26787.97, stdev=8712.84 00:11:02.570 clat percentiles (usec): 00:11:02.570 | 1.00th=[ 7439], 5.00th=[14615], 10.00th=[15401], 20.00th=[16909], 00:11:02.570 | 30.00th=[17695], 40.00th=[26346], 50.00th=[30802], 60.00th=[31851], 00:11:02.570 | 70.00th=[32900], 80.00th=[34341], 90.00th=[35390], 95.00th=[35914], 00:11:02.570 | 99.00th=[43254], 99.50th=[43779], 99.90th=[44303], 99.95th=[44303], 00:11:02.570 | 99.99th=[44303] 00:11:02.570 bw ( KiB/s): min= 8192, max=10344, per=20.89%, avg=9268.00, stdev=1521.69, samples=2 00:11:02.570 iops : min= 2048, max= 2586, avg=2317.00, stdev=380.42, samples=2 00:11:02.570 lat (msec) : 10=1.02%, 20=30.88%, 50=68.10% 00:11:02.570 cpu : usr=2.49%, sys=7.57%, ctx=418, majf=0, minf=4 00:11:02.570 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:11:02.570 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:02.570 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:02.570 issued rwts: total=2048,2444,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:02.570 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:02.570 job2: (groupid=0, jobs=1): err= 0: pid=66455: Fri Dec 6 09:48:27 2024 00:11:02.570 read: IOPS=3379, BW=13.2MiB/s (13.8MB/s)(13.3MiB/1004msec) 00:11:02.570 slat (usec): min=8, max=5993, avg=141.31, stdev=692.26 00:11:02.570 clat (usec): min=415, max=22648, avg=18247.89, stdev=2100.37 00:11:02.570 lat (usec): min=4824, max=22679, avg=18389.19, stdev=1990.64 00:11:02.570 clat percentiles (usec): 00:11:02.570 | 1.00th=[ 9765], 5.00th=[15139], 10.00th=[16909], 20.00th=[17433], 00:11:02.570 | 30.00th=[17695], 40.00th=[17957], 50.00th=[18220], 60.00th=[18482], 00:11:02.570 | 70.00th=[19006], 80.00th=[19530], 90.00th=[20579], 95.00th=[20841], 00:11:02.570 | 99.00th=[22152], 99.50th=[22414], 99.90th=[22676], 99.95th=[22676], 00:11:02.570 | 99.99th=[22676] 00:11:02.570 write: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec); 0 zone resets 00:11:02.570 slat (usec): min=13, max=6095, avg=136.70, stdev=618.28 00:11:02.570 clat (usec): min=12364, max=22617, avg=18010.05, stdev=1363.05 00:11:02.570 lat (usec): min=14970, max=22661, avg=18146.76, stdev=1218.09 00:11:02.570 clat percentiles (usec): 00:11:02.570 | 1.00th=[14222], 5.00th=[15795], 10.00th=[16188], 20.00th=[17171], 00:11:02.570 | 30.00th=[17433], 40.00th=[17695], 50.00th=[17957], 60.00th=[18482], 00:11:02.570 | 70.00th=[18744], 80.00th=[18744], 90.00th=[19268], 95.00th=[20579], 00:11:02.570 | 99.00th=[21627], 99.50th=[22414], 99.90th=[22676], 99.95th=[22676], 00:11:02.570 | 99.99th=[22676] 00:11:02.570 bw ( KiB/s): min=13560, max=15142, per=32.34%, avg=14351.00, stdev=1118.64, samples=2 00:11:02.570 iops : min= 3390, max= 3785, avg=3587.50, stdev=279.31, samples=2 00:11:02.570 lat (usec) : 500=0.01% 00:11:02.570 lat (msec) : 10=0.57%, 20=89.38%, 50=10.03% 00:11:02.570 cpu : usr=3.79%, sys=11.76%, ctx=219, majf=0, minf=2 00:11:02.570 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:11:02.570 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:02.570 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:02.570 issued rwts: total=3393,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:02.570 latency : target=0, window=0, percentile=100.00%, depth=128 
00:11:02.570 job3: (groupid=0, jobs=1): err= 0: pid=66456: Fri Dec 6 09:48:27 2024 00:11:02.570 read: IOPS=1571, BW=6285KiB/s (6435kB/s)(6316KiB/1005msec) 00:11:02.570 slat (usec): min=10, max=8380, avg=272.41, stdev=1058.96 00:11:02.570 clat (usec): min=2004, max=44460, avg=34271.98, stdev=4917.78 00:11:02.570 lat (usec): min=6783, max=44474, avg=34544.40, stdev=4848.52 00:11:02.570 clat percentiles (usec): 00:11:02.570 | 1.00th=[12256], 5.00th=[27657], 10.00th=[29754], 20.00th=[32637], 00:11:02.570 | 30.00th=[33424], 40.00th=[33817], 50.00th=[34341], 60.00th=[35390], 00:11:02.570 | 70.00th=[36439], 80.00th=[37487], 90.00th=[38536], 95.00th=[39584], 00:11:02.570 | 99.00th=[44303], 99.50th=[44303], 99.90th=[44303], 99.95th=[44303], 00:11:02.570 | 99.99th=[44303] 00:11:02.570 write: IOPS=2037, BW=8151KiB/s (8347kB/s)(8192KiB/1005msec); 0 zone resets 00:11:02.570 slat (usec): min=14, max=12199, avg=270.77, stdev=1384.19 00:11:02.570 clat (usec): min=17321, max=55868, avg=35008.75, stdev=5749.48 00:11:02.570 lat (usec): min=17349, max=55901, avg=35279.51, stdev=5689.56 00:11:02.570 clat percentiles (usec): 00:11:02.570 | 1.00th=[23987], 5.00th=[26084], 10.00th=[29492], 20.00th=[31065], 00:11:02.570 | 30.00th=[32637], 40.00th=[33162], 50.00th=[33817], 60.00th=[34341], 00:11:02.570 | 70.00th=[35390], 80.00th=[38536], 90.00th=[44827], 95.00th=[46400], 00:11:02.570 | 99.00th=[49021], 99.50th=[49021], 99.90th=[55837], 99.95th=[55837], 00:11:02.570 | 99.99th=[55837] 00:11:02.570 bw ( KiB/s): min= 7512, max= 8208, per=17.71%, avg=7860.00, stdev=492.15, samples=2 00:11:02.570 iops : min= 1878, max= 2052, avg=1965.00, stdev=123.04, samples=2 00:11:02.570 lat (msec) : 4=0.03%, 10=0.19%, 20=1.16%, 50=98.46%, 100=0.17% 00:11:02.570 cpu : usr=1.39%, sys=7.17%, ctx=288, majf=0, minf=7 00:11:02.570 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:11:02.570 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:02.570 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:02.570 issued rwts: total=1579,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:02.570 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:02.570 00:11:02.570 Run status group 0 (all jobs): 00:11:02.570 READ: bw=37.7MiB/s (39.5MB/s), 6285KiB/s-13.2MiB/s (6435kB/s-13.8MB/s), io=37.9MiB (39.7MB), run=1004-1005msec 00:11:02.570 WRITE: bw=43.3MiB/s (45.4MB/s), 8151KiB/s-13.9MiB/s (8347kB/s-14.6MB/s), io=43.5MiB (45.7MB), run=1004-1005msec 00:11:02.570 00:11:02.570 Disk stats (read/write): 00:11:02.570 nvme0n1: ios=2610/2774, merge=0/0, ticks=16523/16499, in_queue=33022, util=88.37% 00:11:02.570 nvme0n2: ios=1585/1822, merge=0/0, ticks=17225/16343, in_queue=33568, util=88.87% 00:11:02.570 nvme0n3: ios=2982/3072, merge=0/0, ticks=12572/12087, in_queue=24659, util=89.19% 00:11:02.570 nvme0n4: ios=1536/1646, merge=0/0, ticks=17245/16221, in_queue=33466, util=89.32% 00:11:02.570 09:48:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:11:02.570 09:48:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=66471 00:11:02.570 09:48:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:11:02.570 09:48:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:11:02.570 [global] 00:11:02.571 thread=1 00:11:02.571 invalidate=1 00:11:02.571 rw=read 00:11:02.571 time_based=1 00:11:02.571 runtime=10 
00:11:02.571 ioengine=libaio 00:11:02.571 direct=1 00:11:02.571 bs=4096 00:11:02.571 iodepth=1 00:11:02.571 norandommap=1 00:11:02.571 numjobs=1 00:11:02.571 00:11:02.571 [job0] 00:11:02.571 filename=/dev/nvme0n1 00:11:02.571 [job1] 00:11:02.571 filename=/dev/nvme0n2 00:11:02.571 [job2] 00:11:02.571 filename=/dev/nvme0n3 00:11:02.571 [job3] 00:11:02.571 filename=/dev/nvme0n4 00:11:02.571 Could not set queue depth (nvme0n1) 00:11:02.571 Could not set queue depth (nvme0n2) 00:11:02.571 Could not set queue depth (nvme0n3) 00:11:02.571 Could not set queue depth (nvme0n4) 00:11:02.571 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:02.571 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:02.571 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:02.571 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:02.571 fio-3.35 00:11:02.571 Starting 4 threads 00:11:05.852 09:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:11:05.852 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=32784384, buflen=4096 00:11:05.852 fio: pid=66520, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:05.852 09:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:11:06.110 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=48074752, buflen=4096 00:11:06.110 fio: pid=66519, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:06.110 09:48:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:06.110 09:48:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:11:06.368 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=43679744, buflen=4096 00:11:06.368 fio: pid=66517, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:06.368 09:48:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:06.368 09:48:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:11:06.627 fio: pid=66518, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:06.627 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=54992896, buflen=4096 00:11:06.627 00:11:06.627 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66517: Fri Dec 6 09:48:31 2024 00:11:06.627 read: IOPS=2983, BW=11.7MiB/s (12.2MB/s)(41.7MiB/3575msec) 00:11:06.627 slat (usec): min=13, max=10747, avg=23.41, stdev=161.74 00:11:06.627 clat (usec): min=125, max=2939, avg=309.42, stdev=89.25 00:11:06.627 lat (usec): min=140, max=11013, avg=332.83, stdev=185.36 00:11:06.627 clat percentiles (usec): 00:11:06.627 | 1.00th=[ 178], 5.00th=[ 198], 10.00th=[ 212], 20.00th=[ 239], 00:11:06.627 | 30.00th=[ 265], 40.00th=[ 285], 50.00th=[ 306], 60.00th=[ 322], 00:11:06.627 | 
70.00th=[ 343], 80.00th=[ 367], 90.00th=[ 404], 95.00th=[ 437], 00:11:06.627 | 99.00th=[ 529], 99.50th=[ 578], 99.90th=[ 1090], 99.95th=[ 1156], 00:11:06.627 | 99.99th=[ 2180] 00:11:06.627 bw ( KiB/s): min= 9752, max=11808, per=24.41%, avg=11137.33, stdev=885.23, samples=6 00:11:06.627 iops : min= 2438, max= 2952, avg=2784.33, stdev=221.31, samples=6 00:11:06.627 lat (usec) : 250=23.92%, 500=74.30%, 750=1.56%, 1000=0.08% 00:11:06.627 lat (msec) : 2=0.10%, 4=0.03% 00:11:06.627 cpu : usr=1.29%, sys=5.26%, ctx=10670, majf=0, minf=1 00:11:06.627 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:06.627 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:06.627 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:06.627 issued rwts: total=10665,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:06.627 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:06.627 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66518: Fri Dec 6 09:48:31 2024 00:11:06.627 read: IOPS=3494, BW=13.7MiB/s (14.3MB/s)(52.4MiB/3842msec) 00:11:06.627 slat (usec): min=11, max=12849, avg=20.90, stdev=201.61 00:11:06.627 clat (usec): min=132, max=2705, avg=263.34, stdev=61.33 00:11:06.627 lat (usec): min=157, max=13099, avg=284.24, stdev=210.56 00:11:06.627 clat percentiles (usec): 00:11:06.627 | 1.00th=[ 165], 5.00th=[ 186], 10.00th=[ 200], 20.00th=[ 219], 00:11:06.627 | 30.00th=[ 233], 40.00th=[ 247], 50.00th=[ 260], 60.00th=[ 273], 00:11:06.627 | 70.00th=[ 289], 80.00th=[ 306], 90.00th=[ 330], 95.00th=[ 351], 00:11:06.627 | 99.00th=[ 392], 99.50th=[ 412], 99.90th=[ 652], 99.95th=[ 1004], 00:11:06.627 | 99.99th=[ 2057] 00:11:06.627 bw ( KiB/s): min=12624, max=14910, per=29.82%, avg=13607.71, stdev=907.37, samples=7 00:11:06.627 iops : min= 3156, max= 3727, avg=3401.86, stdev=226.72, samples=7 00:11:06.627 lat (usec) : 250=42.59%, 500=57.24%, 750=0.08%, 1000=0.03% 00:11:06.627 lat (msec) : 2=0.04%, 4=0.01% 00:11:06.627 cpu : usr=1.28%, sys=5.10%, ctx=13445, majf=0, minf=1 00:11:06.627 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:06.627 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:06.627 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:06.627 issued rwts: total=13427,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:06.627 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:06.627 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66519: Fri Dec 6 09:48:31 2024 00:11:06.627 read: IOPS=3655, BW=14.3MiB/s (15.0MB/s)(45.8MiB/3211msec) 00:11:06.627 slat (usec): min=10, max=13842, avg=19.26, stdev=146.31 00:11:06.627 clat (usec): min=156, max=2829, avg=252.69, stdev=58.37 00:11:06.627 lat (usec): min=169, max=14113, avg=271.95, stdev=157.72 00:11:06.627 clat percentiles (usec): 00:11:06.627 | 1.00th=[ 178], 5.00th=[ 194], 10.00th=[ 202], 20.00th=[ 217], 00:11:06.627 | 30.00th=[ 227], 40.00th=[ 237], 50.00th=[ 247], 60.00th=[ 258], 00:11:06.627 | 70.00th=[ 269], 80.00th=[ 285], 90.00th=[ 310], 95.00th=[ 326], 00:11:06.627 | 99.00th=[ 367], 99.50th=[ 388], 99.90th=[ 482], 99.95th=[ 1057], 00:11:06.627 | 99.99th=[ 2442] 00:11:06.627 bw ( KiB/s): min=13704, max=15160, per=32.22%, avg=14702.67, stdev=534.83, samples=6 00:11:06.627 iops : min= 3426, max= 3790, avg=3675.67, stdev=133.71, samples=6 00:11:06.627 lat (usec) : 250=52.85%, 
500=47.04%, 750=0.03%, 1000=0.02% 00:11:06.627 lat (msec) : 2=0.03%, 4=0.03% 00:11:06.627 cpu : usr=1.53%, sys=5.14%, ctx=11740, majf=0, minf=1 00:11:06.627 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:06.627 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:06.627 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:06.627 issued rwts: total=11738,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:06.627 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:06.627 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66520: Fri Dec 6 09:48:31 2024 00:11:06.627 read: IOPS=2758, BW=10.8MiB/s (11.3MB/s)(31.3MiB/2902msec) 00:11:06.627 slat (nsec): min=14687, max=98575, avg=21303.13, stdev=7046.76 00:11:06.627 clat (usec): min=172, max=2311, avg=338.49, stdev=77.92 00:11:06.627 lat (usec): min=191, max=2343, avg=359.79, stdev=79.78 00:11:06.627 clat percentiles (usec): 00:11:06.627 | 1.00th=[ 235], 5.00th=[ 255], 10.00th=[ 269], 20.00th=[ 285], 00:11:06.627 | 30.00th=[ 297], 40.00th=[ 314], 50.00th=[ 326], 60.00th=[ 343], 00:11:06.627 | 70.00th=[ 363], 80.00th=[ 383], 90.00th=[ 416], 95.00th=[ 457], 00:11:06.627 | 99.00th=[ 553], 99.50th=[ 619], 99.90th=[ 873], 99.95th=[ 1287], 00:11:06.627 | 99.99th=[ 2311] 00:11:06.627 bw ( KiB/s): min= 9728, max=11832, per=24.75%, avg=11296.00, stdev=883.95, samples=5 00:11:06.627 iops : min= 2432, max= 2958, avg=2824.00, stdev=220.99, samples=5 00:11:06.627 lat (usec) : 250=3.26%, 500=94.04%, 750=2.51%, 1000=0.11% 00:11:06.627 lat (msec) : 2=0.02%, 4=0.04% 00:11:06.627 cpu : usr=1.07%, sys=5.27%, ctx=8006, majf=0, minf=2 00:11:06.627 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:06.627 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:06.627 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:06.627 issued rwts: total=8005,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:06.627 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:06.627 00:11:06.627 Run status group 0 (all jobs): 00:11:06.627 READ: bw=44.6MiB/s (46.7MB/s), 10.8MiB/s-14.3MiB/s (11.3MB/s-15.0MB/s), io=171MiB (180MB), run=2902-3842msec 00:11:06.627 00:11:06.627 Disk stats (read/write): 00:11:06.627 nvme0n1: ios=9787/0, merge=0/0, ticks=3138/0, in_queue=3138, util=95.39% 00:11:06.627 nvme0n2: ios=12328/0, merge=0/0, ticks=3380/0, in_queue=3380, util=95.43% 00:11:06.627 nvme0n3: ios=11402/0, merge=0/0, ticks=2914/0, in_queue=2914, util=96.08% 00:11:06.627 nvme0n4: ios=7929/0, merge=0/0, ticks=2704/0, in_queue=2704, util=96.76% 00:11:06.627 09:48:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:06.627 09:48:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:11:06.887 09:48:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:06.887 09:48:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:11:07.456 09:48:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:07.456 09:48:32 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:11:07.714 09:48:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:07.714 09:48:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:11:07.974 09:48:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:07.974 09:48:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:11:08.234 09:48:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:11:08.234 09:48:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 66471 00:11:08.234 09:48:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:11:08.234 09:48:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:08.234 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:08.234 09:48:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:08.234 09:48:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:11:08.234 09:48:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:08.234 09:48:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:08.234 09:48:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:08.234 09:48:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:08.234 nvmf hotplug test: fio failed as expected 00:11:08.234 09:48:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:11:08.234 09:48:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:11:08.234 09:48:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:11:08.234 09:48:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:08.493 09:48:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:11:08.752 09:48:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:11:08.752 09:48:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:11:08.752 09:48:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:11:08.752 09:48:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:11:08.752 09:48:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:08.752 09:48:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:11:08.752 09:48:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
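Note on the hotplug sequence above: the script starts a read-only fio job in the background, deletes the raid/concat and malloc bdevs backing the exported namespaces while that I/O is still in flight, and then treats a non-zero fio exit as the expected outcome ("fio failed as expected"). A minimal sketch of the same pattern, reusing only commands visible in this trace (the exact control flow inside fio.sh is not reproduced here):

    # launch fio through the SPDK wrapper and remember its pid
    /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &
    fio_pid=$!

    # pull the backing bdevs out from under the running I/O
    for bdev in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete "$bdev"
    done

    # fio is now expected to fail; a clean exit would mean the hotplug went unnoticed
    if wait "$fio_pid"; then
        echo "unexpected: fio completed cleanly"
    else
        echo "nvmf hotplug test: fio failed as expected"
    fi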
00:11:08.752 09:48:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:11:08.752 09:48:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:08.752 09:48:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:08.752 rmmod nvme_tcp 00:11:08.752 rmmod nvme_fabrics 00:11:08.752 rmmod nvme_keyring 00:11:08.752 09:48:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:08.752 09:48:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:11:08.752 09:48:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:11:08.752 09:48:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 66091 ']' 00:11:08.753 09:48:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 66091 00:11:08.753 09:48:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 66091 ']' 00:11:08.753 09:48:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 66091 00:11:08.753 09:48:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:11:08.753 09:48:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:08.753 09:48:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66091 00:11:08.753 09:48:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:08.753 killing process with pid 66091 00:11:08.753 09:48:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:08.753 09:48:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66091' 00:11:08.753 09:48:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 66091 00:11:08.753 09:48:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 66091 00:11:09.012 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:09.012 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:09.012 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:09.012 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:11:09.012 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:09.012 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:11:09.012 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:11:09.012 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:09.012 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:09.012 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:09.012 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:09.012 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@235 -- # ip 
link set nvmf_tgt_br nomaster 00:11:09.012 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:09.012 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:09.012 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:09.012 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:09.012 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:09.012 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:09.012 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:09.012 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:09.271 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:09.271 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:09.271 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:09.271 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:09.271 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:09.272 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:09.272 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 00:11:09.272 00:11:09.272 real 0m20.186s 00:11:09.272 user 1m16.190s 00:11:09.272 sys 0m9.701s 00:11:09.272 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:09.272 ************************************ 00:11:09.272 END TEST nvmf_fio_target 00:11:09.272 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:09.272 ************************************ 00:11:09.272 09:48:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:09.272 09:48:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:09.272 09:48:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:09.272 09:48:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:09.272 ************************************ 00:11:09.272 START TEST nvmf_bdevio 00:11:09.272 ************************************ 00:11:09.272 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:09.272 * Looking for test storage... 
00:11:09.272 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:09.272 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:09.272 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:11:09.272 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:09.533 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:09.533 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:09.533 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:09.533 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:09.533 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:11:09.533 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:11:09.533 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:11:09.533 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:11:09.533 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:11:09.533 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:11:09.533 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:11:09.533 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:09.533 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:11:09.533 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:11:09.533 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:09.533 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:09.533 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:11:09.533 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:11:09.533 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:09.533 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:11:09.533 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:11:09.533 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:11:09.533 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:11:09.533 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:09.533 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:11:09.533 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:11:09.533 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:09.533 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:09.533 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:11:09.533 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:09.533 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:09.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.533 --rc genhtml_branch_coverage=1 00:11:09.533 --rc genhtml_function_coverage=1 00:11:09.533 --rc genhtml_legend=1 00:11:09.533 --rc geninfo_all_blocks=1 00:11:09.533 --rc geninfo_unexecuted_blocks=1 00:11:09.533 00:11:09.533 ' 00:11:09.533 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:09.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.533 --rc genhtml_branch_coverage=1 00:11:09.533 --rc genhtml_function_coverage=1 00:11:09.533 --rc genhtml_legend=1 00:11:09.533 --rc geninfo_all_blocks=1 00:11:09.533 --rc geninfo_unexecuted_blocks=1 00:11:09.533 00:11:09.533 ' 00:11:09.533 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:09.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.533 --rc genhtml_branch_coverage=1 00:11:09.533 --rc genhtml_function_coverage=1 00:11:09.533 --rc genhtml_legend=1 00:11:09.533 --rc geninfo_all_blocks=1 00:11:09.533 --rc geninfo_unexecuted_blocks=1 00:11:09.533 00:11:09.533 ' 00:11:09.533 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:09.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.533 --rc genhtml_branch_coverage=1 00:11:09.533 --rc genhtml_function_coverage=1 00:11:09.533 --rc genhtml_legend=1 00:11:09.533 --rc geninfo_all_blocks=1 00:11:09.533 --rc geninfo_unexecuted_blocks=1 00:11:09.533 00:11:09.533 ' 00:11:09.533 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:09.533 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:11:09.533 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:11:09.533 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:09.533 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:09.533 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:09.533 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:09.533 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:09.533 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:09.533 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:09.533 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:09.533 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:09.533 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:11:09.533 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:11:09.533 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:09.533 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:09.533 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:09.533 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:09.533 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:09.533 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:11:09.533 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:09.533 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:09.533 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:09.533 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.533 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.533 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.533 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:11:09.533 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.533 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:11:09.533 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:09.533 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:09.533 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:09.533 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:09.533 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:09.534 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:09.534 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:09.534 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:09.534 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:09.534 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:09.534 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:09.534 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:09.534 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 
00:11:09.534 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:09.534 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:09.534 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:09.534 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:09.534 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:09.534 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:09.534 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:09.534 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:09.534 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:09.534 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:09.534 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:09.534 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:09.534 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:09.534 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:09.534 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:09.534 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:09.534 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:09.534 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:09.534 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:09.534 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:09.534 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:09.534 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:09.534 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:09.534 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:09.534 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:09.534 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:09.534 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:09.534 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:09.534 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:09.534 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:09.534 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio 
-- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:09.534 Cannot find device "nvmf_init_br" 00:11:09.534 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:11:09.534 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:09.534 Cannot find device "nvmf_init_br2" 00:11:09.534 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:11:09.534 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:09.534 Cannot find device "nvmf_tgt_br" 00:11:09.534 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # true 00:11:09.534 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:09.534 Cannot find device "nvmf_tgt_br2" 00:11:09.534 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # true 00:11:09.534 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:09.534 Cannot find device "nvmf_init_br" 00:11:09.534 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # true 00:11:09.534 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:09.534 Cannot find device "nvmf_init_br2" 00:11:09.534 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # true 00:11:09.534 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:09.534 Cannot find device "nvmf_tgt_br" 00:11:09.534 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # true 00:11:09.534 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:09.534 Cannot find device "nvmf_tgt_br2" 00:11:09.534 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # true 00:11:09.534 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:09.534 Cannot find device "nvmf_br" 00:11:09.534 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # true 00:11:09.534 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:09.534 Cannot find device "nvmf_init_if" 00:11:09.534 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # true 00:11:09.534 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:09.534 Cannot find device "nvmf_init_if2" 00:11:09.534 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # true 00:11:09.534 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:09.534 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:09.534 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # true 00:11:09.534 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:09.534 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:09.534 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # true 00:11:09.534 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:09.793 
09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:09.793 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:09.793 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:09.793 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:09.793 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:09.793 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:09.793 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:09.793 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:09.793 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:09.793 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:09.793 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:09.793 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:09.793 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:09.793 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:09.793 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:09.793 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:09.793 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:09.793 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:09.793 09:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:09.793 09:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:09.793 09:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:09.793 09:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:09.793 09:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:09.793 09:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:09.793 09:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:10.066 09:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:10.066 09:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 
4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:10.066 09:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:10.066 09:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:10.066 09:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:10.066 09:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:10.066 09:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:10.066 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:10.066 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.078 ms 00:11:10.066 00:11:10.066 --- 10.0.0.3 ping statistics --- 00:11:10.066 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:10.066 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:11:10.066 09:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:10.066 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:10.066 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.071 ms 00:11:10.066 00:11:10.066 --- 10.0.0.4 ping statistics --- 00:11:10.066 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:10.066 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:11:10.066 09:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:10.066 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:10.066 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:11:10.066 00:11:10.066 --- 10.0.0.1 ping statistics --- 00:11:10.066 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:10.066 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:11:10.066 09:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:10.066 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:10.066 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:11:10.066 00:11:10.066 --- 10.0.0.2 ping statistics --- 00:11:10.066 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:10.066 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:11:10.066 09:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:10.066 09:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@461 -- # return 0 00:11:10.066 09:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:10.066 09:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:10.066 09:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:10.066 09:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:10.066 09:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:10.066 09:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:10.066 09:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:10.066 09:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:10.066 09:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:10.066 09:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:10.066 09:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:10.066 09:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=66844 00:11:10.066 09:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:11:10.066 09:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 66844 00:11:10.066 09:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 66844 ']' 00:11:10.066 09:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:10.066 09:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:10.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:10.066 09:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:10.066 09:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:10.066 09:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:10.066 [2024-12-06 09:48:35.207297] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:11:10.066 [2024-12-06 09:48:35.207406] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:10.325 [2024-12-06 09:48:35.369611] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:10.325 [2024-12-06 09:48:35.441956] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:10.325 [2024-12-06 09:48:35.442040] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:10.325 [2024-12-06 09:48:35.442054] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:10.325 [2024-12-06 09:48:35.442065] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:10.325 [2024-12-06 09:48:35.442074] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:10.325 [2024-12-06 09:48:35.443747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:11:10.325 [2024-12-06 09:48:35.443866] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:11:10.325 [2024-12-06 09:48:35.444034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:11:10.325 [2024-12-06 09:48:35.444041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:10.325 [2024-12-06 09:48:35.506113] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:11.260 09:48:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:11.260 09:48:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:11:11.260 09:48:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:11.260 09:48:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:11.260 09:48:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:11.260 09:48:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:11.260 09:48:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:11.260 09:48:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.260 09:48:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:11.260 [2024-12-06 09:48:36.249396] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:11.260 09:48:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.260 09:48:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:11.260 09:48:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.260 09:48:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:11.260 Malloc0 00:11:11.260 09:48:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.260 09:48:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:11:11.260 09:48:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.260 09:48:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:11.260 09:48:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.260 09:48:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:11.260 09:48:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.260 09:48:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:11.260 09:48:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.260 09:48:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:11.260 09:48:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.260 09:48:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:11.260 [2024-12-06 09:48:36.327968] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:11.260 09:48:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.260 09:48:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:11:11.260 09:48:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:11.260 09:48:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:11:11.261 09:48:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:11:11.261 09:48:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:11.261 09:48:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:11.261 { 00:11:11.261 "params": { 00:11:11.261 "name": "Nvme$subsystem", 00:11:11.261 "trtype": "$TEST_TRANSPORT", 00:11:11.261 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:11.261 "adrfam": "ipv4", 00:11:11.261 "trsvcid": "$NVMF_PORT", 00:11:11.261 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:11.261 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:11.261 "hdgst": ${hdgst:-false}, 00:11:11.261 "ddgst": ${ddgst:-false} 00:11:11.261 }, 00:11:11.261 "method": "bdev_nvme_attach_controller" 00:11:11.261 } 00:11:11.261 EOF 00:11:11.261 )") 00:11:11.261 09:48:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:11:11.261 09:48:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
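The --json /dev/fd/62 argument above feeds bdevio a one-controller bdev_nvme configuration produced by gen_nvmf_target_json; the rendered JSON is printed just below. The same attachment could also be made interactively against a running SPDK application rather than through a config file — a rough sketch only, with the rpc.py option spellings taken from memory and therefore to be treated as assumptions, not as an excerpt of this test:

    # attach the listener created above as an NVMe/TCP controller named Nvme1
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller \
        -b Nvme1 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1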
00:11:11.261 09:48:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:11:11.261 09:48:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:11.261 "params": { 00:11:11.261 "name": "Nvme1", 00:11:11.261 "trtype": "tcp", 00:11:11.261 "traddr": "10.0.0.3", 00:11:11.261 "adrfam": "ipv4", 00:11:11.261 "trsvcid": "4420", 00:11:11.261 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:11.261 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:11.261 "hdgst": false, 00:11:11.261 "ddgst": false 00:11:11.261 }, 00:11:11.261 "method": "bdev_nvme_attach_controller" 00:11:11.261 }' 00:11:11.261 [2024-12-06 09:48:36.383060] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:11:11.261 [2024-12-06 09:48:36.383155] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66881 ] 00:11:11.519 [2024-12-06 09:48:36.532274] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:11.520 [2024-12-06 09:48:36.597391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:11.520 [2024-12-06 09:48:36.597522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:11.520 [2024-12-06 09:48:36.597523] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:11.520 [2024-12-06 09:48:36.667565] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:11.779 I/O targets: 00:11:11.779 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:11.779 00:11:11.779 00:11:11.779 CUnit - A unit testing framework for C - Version 2.1-3 00:11:11.779 http://cunit.sourceforge.net/ 00:11:11.779 00:11:11.779 00:11:11.779 Suite: bdevio tests on: Nvme1n1 00:11:11.779 Test: blockdev write read block ...passed 00:11:11.779 Test: blockdev write zeroes read block ...passed 00:11:11.779 Test: blockdev write zeroes read no split ...passed 00:11:11.779 Test: blockdev write zeroes read split ...passed 00:11:11.779 Test: blockdev write zeroes read split partial ...passed 00:11:11.779 Test: blockdev reset ...[2024-12-06 09:48:36.826573] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:11:11.779 [2024-12-06 09:48:36.826675] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc50b80 (9): Bad file descriptor 00:11:11.779 [2024-12-06 09:48:36.840357] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:11:11.779 passed 00:11:11.779 Test: blockdev write read 8 blocks ...passed 00:11:11.779 Test: blockdev write read size > 128k ...passed 00:11:11.779 Test: blockdev write read invalid size ...passed 00:11:11.779 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:11.779 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:11.779 Test: blockdev write read max offset ...passed 00:11:11.779 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:11.779 Test: blockdev writev readv 8 blocks ...passed 00:11:11.779 Test: blockdev writev readv 30 x 1block ...passed 00:11:11.779 Test: blockdev writev readv block ...passed 00:11:11.779 Test: blockdev writev readv size > 128k ...passed 00:11:11.779 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:11.779 Test: blockdev comparev and writev ...passed 00:11:11.779 Test: blockdev nvme passthru rw ...[2024-12-06 09:48:36.848106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:11.779 [2024-12-06 09:48:36.848142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:11.779 [2024-12-06 09:48:36.848161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:11.779 [2024-12-06 09:48:36.848172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:11.779 [2024-12-06 09:48:36.848457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:11.779 [2024-12-06 09:48:36.848473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:11.779 [2024-12-06 09:48:36.848489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:11.779 [2024-12-06 09:48:36.848500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:11.779 [2024-12-06 09:48:36.848798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:11.779 [2024-12-06 09:48:36.848815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:11.779 [2024-12-06 09:48:36.848831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:11.779 [2024-12-06 09:48:36.848842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:11.779 [2024-12-06 09:48:36.849125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:11.779 [2024-12-06 09:48:36.849141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:11.779 [2024-12-06 09:48:36.849156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:11.779 [2024-12-06 09:48:36.849166] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:11.780 passed 00:11:11.780 Test: blockdev nvme passthru vendor specific ...passed 00:11:11.780 Test: blockdev nvme admin passthru ...[2024-12-06 09:48:36.849949] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:11.780 [2024-12-06 09:48:36.849973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:11.780 [2024-12-06 09:48:36.850094] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:11.780 [2024-12-06 09:48:36.850110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:11.780 [2024-12-06 09:48:36.850212] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:11.780 [2024-12-06 09:48:36.850227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:11.780 [2024-12-06 09:48:36.850333] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:11.780 [2024-12-06 09:48:36.850357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:11.780 passed 00:11:11.780 Test: blockdev copy ...passed 00:11:11.780 00:11:11.780 Run Summary: Type Total Ran Passed Failed Inactive 00:11:11.780 suites 1 1 n/a 0 0 00:11:11.780 tests 23 23 23 0 0 00:11:11.780 asserts 152 152 152 0 n/a 00:11:11.780 00:11:11.780 Elapsed time = 0.155 seconds 00:11:12.109 09:48:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:12.109 09:48:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.109 09:48:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:12.109 09:48:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.109 09:48:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:12.109 09:48:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:11:12.109 09:48:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:12.109 09:48:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:11:12.109 09:48:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:12.109 09:48:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:11:12.109 09:48:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:12.109 09:48:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:12.109 rmmod nvme_tcp 00:11:12.109 rmmod nvme_fabrics 00:11:12.109 rmmod nvme_keyring 00:11:12.109 09:48:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:12.109 09:48:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:11:12.109 09:48:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:11:12.109 
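The cleanup that follows the run summary boils down to a few steps; a minimal sketch mirroring the traced commands, with the retry loop around module unload simplified and the sleep added as an assumed back-off.
  ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # traced rpc_cmd at bdevio.sh@26
  sync
  for i in {1..20}; do
    modprobe -v -r nvme-tcp && break   # unloads nvme_tcp, nvme_fabrics, nvme_keyring per the rmmod lines above
    sleep 1                            # assumed back-off; the trace only shows the retry bound
  done
  modprobe -v -r nvme-fabrics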
09:48:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 66844 ']' 00:11:12.109 09:48:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 66844 00:11:12.109 09:48:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 66844 ']' 00:11:12.109 09:48:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 66844 00:11:12.109 09:48:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:11:12.109 09:48:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:12.109 09:48:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66844 00:11:12.109 09:48:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:11:12.109 09:48:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:11:12.109 killing process with pid 66844 00:11:12.109 09:48:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66844' 00:11:12.109 09:48:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 66844 00:11:12.109 09:48:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 66844 00:11:12.387 09:48:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:12.387 09:48:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:12.387 09:48:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:12.387 09:48:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:11:12.387 09:48:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:12.387 09:48:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:11:12.387 09:48:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:11:12.387 09:48:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:12.387 09:48:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:12.387 09:48:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:12.387 09:48:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:12.387 09:48:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:12.388 09:48:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:12.388 09:48:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:12.388 09:48:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:12.388 09:48:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:12.388 09:48:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:12.388 09:48:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:12.388 09:48:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # ip link 
delete nvmf_init_if 00:11:12.388 09:48:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:12.388 09:48:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:12.647 09:48:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:12.647 09:48:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:12.647 09:48:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:12.647 09:48:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:12.647 09:48:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:12.647 09:48:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 00:11:12.647 00:11:12.647 real 0m3.277s 00:11:12.647 user 0m9.596s 00:11:12.647 sys 0m0.966s 00:11:12.647 09:48:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:12.647 09:48:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:12.647 ************************************ 00:11:12.647 END TEST nvmf_bdevio 00:11:12.647 ************************************ 00:11:12.647 09:48:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:12.647 00:11:12.647 real 2m36.690s 00:11:12.647 user 6m50.312s 00:11:12.647 sys 0m53.337s 00:11:12.647 09:48:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:12.647 09:48:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:12.647 ************************************ 00:11:12.647 END TEST nvmf_target_core 00:11:12.647 ************************************ 00:11:12.647 09:48:37 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:12.647 09:48:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:12.647 09:48:37 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:12.647 09:48:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:12.647 ************************************ 00:11:12.647 START TEST nvmf_target_extra 00:11:12.647 ************************************ 00:11:12.647 09:48:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:12.647 * Looking for test storage... 
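nvmftestfini's network teardown, condensed from the iptr and nvmf_veth_fini steps traced above; the commands are the ones shown, minus the per-interface down/nomaster churn, and the final netns delete is an assumption about what _remove_spdk_ns does.
  iptables-save | grep -v SPDK_NVMF | iptables-restore          # drop only the rules the test tagged
  ip link delete nvmf_br type bridge
  ip link delete nvmf_init_if
  ip link delete nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
  ip netns delete nvmf_tgt_ns_spdk                              # assumed equivalent of _remove_spdk_ns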
00:11:12.647 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:11:12.647 09:48:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:12.647 09:48:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:12.647 09:48:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lcov --version 00:11:12.907 09:48:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:12.907 09:48:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:12.907 09:48:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:12.907 09:48:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:12.907 09:48:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:11:12.907 09:48:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:11:12.907 09:48:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:11:12.907 09:48:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:11:12.907 09:48:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:11:12.907 09:48:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:11:12.907 09:48:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:11:12.907 09:48:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:12.907 09:48:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:11:12.907 09:48:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:11:12.907 09:48:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:12.907 09:48:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:12.907 09:48:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:11:12.907 09:48:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:11:12.907 09:48:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:12.907 09:48:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:11:12.907 09:48:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:11:12.907 09:48:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:11:12.907 09:48:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:11:12.907 09:48:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:12.907 09:48:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:11:12.907 09:48:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:11:12.907 09:48:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:12.907 09:48:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:12.907 09:48:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:11:12.907 09:48:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:12.907 09:48:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:12.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:12.907 --rc genhtml_branch_coverage=1 00:11:12.907 --rc genhtml_function_coverage=1 00:11:12.907 --rc genhtml_legend=1 00:11:12.907 --rc geninfo_all_blocks=1 00:11:12.907 --rc geninfo_unexecuted_blocks=1 00:11:12.907 00:11:12.907 ' 00:11:12.907 09:48:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:12.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:12.907 --rc genhtml_branch_coverage=1 00:11:12.907 --rc genhtml_function_coverage=1 00:11:12.907 --rc genhtml_legend=1 00:11:12.907 --rc geninfo_all_blocks=1 00:11:12.907 --rc geninfo_unexecuted_blocks=1 00:11:12.907 00:11:12.907 ' 00:11:12.907 09:48:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:12.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:12.907 --rc genhtml_branch_coverage=1 00:11:12.907 --rc genhtml_function_coverage=1 00:11:12.907 --rc genhtml_legend=1 00:11:12.907 --rc geninfo_all_blocks=1 00:11:12.907 --rc geninfo_unexecuted_blocks=1 00:11:12.907 00:11:12.907 ' 00:11:12.907 09:48:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:12.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:12.907 --rc genhtml_branch_coverage=1 00:11:12.907 --rc genhtml_function_coverage=1 00:11:12.907 --rc genhtml_legend=1 00:11:12.907 --rc geninfo_all_blocks=1 00:11:12.907 --rc geninfo_unexecuted_blocks=1 00:11:12.907 00:11:12.907 ' 00:11:12.907 09:48:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:12.907 09:48:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:11:12.907 09:48:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:12.907 09:48:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:12.907 09:48:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:12.907 09:48:37 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:12.907 09:48:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:12.907 09:48:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:12.907 09:48:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:12.907 09:48:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:12.907 09:48:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:12.907 09:48:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:12.907 09:48:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:11:12.907 09:48:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:11:12.907 09:48:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:12.907 09:48:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:12.907 09:48:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:12.907 09:48:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:12.907 09:48:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:12.907 09:48:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:11:12.907 09:48:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:12.907 09:48:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:12.907 09:48:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:12.907 09:48:37 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.907 09:48:37 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.907 09:48:37 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.907 09:48:37 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:11:12.907 09:48:37 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.907 09:48:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:11:12.907 09:48:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:12.907 09:48:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:12.907 09:48:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:12.907 09:48:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:12.907 09:48:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:12.907 09:48:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:12.907 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:12.907 09:48:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:12.907 09:48:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:12.907 09:48:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:12.907 09:48:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:12.907 09:48:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:11:12.907 09:48:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 1 -eq 0 ]] 00:11:12.907 09:48:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:11:12.908 09:48:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:12.908 09:48:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:12.908 09:48:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:12.908 ************************************ 00:11:12.908 START TEST nvmf_auth_target 00:11:12.908 ************************************ 00:11:12.908 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:11:12.908 * Looking for test storage... 
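The "[: : integer expression expected" message from common.sh line 33 above is a benign shell quirk rather than a failure: an empty string fed to a numeric test. A tiny reproduction, with a hypothetical variable name standing in for whatever flag is unset in this run.
  flag=""                      # hypothetical stand-in for the unset autotest flag
  if [ "$flag" -eq 1 ]; then   # prints: [: : integer expression expected, then evaluates false
    echo "feature enabled"
  fi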
00:11:12.908 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:12.908 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:12.908 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lcov --version 00:11:12.908 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:12.908 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:12.908 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:12.908 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:12.908 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:12.908 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:11:12.908 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:11:12.908 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:11:12.908 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:11:12.908 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:11:12.908 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:11:12.908 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:11:12.908 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:12.908 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:11:12.908 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:11:12.908 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:12.908 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:12.908 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:11:12.908 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:11:12.908 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:12.908 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:11:12.908 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:11:13.168 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:11:13.168 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:11:13.168 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:13.168 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:11:13.168 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:11:13.168 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:13.168 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:13.168 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:11:13.168 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:13.168 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:13.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:13.168 --rc genhtml_branch_coverage=1 00:11:13.168 --rc genhtml_function_coverage=1 00:11:13.168 --rc genhtml_legend=1 00:11:13.168 --rc geninfo_all_blocks=1 00:11:13.168 --rc geninfo_unexecuted_blocks=1 00:11:13.168 00:11:13.168 ' 00:11:13.168 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:13.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:13.168 --rc genhtml_branch_coverage=1 00:11:13.168 --rc genhtml_function_coverage=1 00:11:13.168 --rc genhtml_legend=1 00:11:13.168 --rc geninfo_all_blocks=1 00:11:13.168 --rc geninfo_unexecuted_blocks=1 00:11:13.168 00:11:13.168 ' 00:11:13.168 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:13.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:13.168 --rc genhtml_branch_coverage=1 00:11:13.168 --rc genhtml_function_coverage=1 00:11:13.168 --rc genhtml_legend=1 00:11:13.168 --rc geninfo_all_blocks=1 00:11:13.168 --rc geninfo_unexecuted_blocks=1 00:11:13.168 00:11:13.168 ' 00:11:13.168 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:13.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:13.168 --rc genhtml_branch_coverage=1 00:11:13.168 --rc genhtml_function_coverage=1 00:11:13.168 --rc genhtml_legend=1 00:11:13.168 --rc geninfo_all_blocks=1 00:11:13.168 --rc geninfo_unexecuted_blocks=1 00:11:13.168 00:11:13.168 ' 00:11:13.168 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:13.168 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@7 -- # uname -s 00:11:13.168 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:13.168 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:13.168 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:13.168 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:13.168 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:13.168 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:13.168 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:13.168 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:13.168 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:13.168 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:13.168 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:11:13.168 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:11:13.168 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:13.168 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:13.168 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:13.168 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:13.168 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:13.168 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:11:13.168 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:13.168 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:13.168 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:13.168 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:13.168 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:13.168 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:13.168 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:11:13.168 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:13.168 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:11:13.168 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:13.168 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:13.168 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:13.168 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:13.168 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:13.168 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:13.168 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:13.168 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:13.168 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:13.168 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:13.168 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:11:13.168 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" 
"ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:11:13.168 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:11:13.168 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:11:13.168 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:11:13.168 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:11:13.168 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:11:13.168 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:11:13.168 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:13.168 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:13.168 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:13.168 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:13.168 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:13.168 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:13.168 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:13.168 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:13.168 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:13.169 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:13.169 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:13.169 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:13.169 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:13.169 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:13.169 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:13.169 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:13.169 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:13.169 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:13.169 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:13.169 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:13.169 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:13.169 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:13.169 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:13.169 
09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:13.169 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:13.169 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:13.169 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:13.169 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:13.169 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:13.169 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:13.169 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:13.169 Cannot find device "nvmf_init_br" 00:11:13.169 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:11:13.169 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:13.169 Cannot find device "nvmf_init_br2" 00:11:13.169 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:11:13.169 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:13.169 Cannot find device "nvmf_tgt_br" 00:11:13.169 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # true 00:11:13.169 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:13.169 Cannot find device "nvmf_tgt_br2" 00:11:13.169 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # true 00:11:13.169 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:13.169 Cannot find device "nvmf_init_br" 00:11:13.169 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # true 00:11:13.169 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:13.169 Cannot find device "nvmf_init_br2" 00:11:13.169 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # true 00:11:13.169 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:13.169 Cannot find device "nvmf_tgt_br" 00:11:13.169 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # true 00:11:13.169 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:13.169 Cannot find device "nvmf_tgt_br2" 00:11:13.169 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # true 00:11:13.169 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:13.169 Cannot find device "nvmf_br" 00:11:13.169 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # true 00:11:13.169 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:13.169 Cannot find device "nvmf_init_if" 00:11:13.169 09:48:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # true 00:11:13.169 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:13.169 Cannot find device "nvmf_init_if2" 00:11:13.169 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # true 00:11:13.169 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:13.169 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:13.169 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # true 00:11:13.169 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:13.169 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:13.169 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # true 00:11:13.169 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:13.169 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:13.169 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:13.169 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:13.169 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:13.169 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:13.169 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:13.428 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:13.428 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:13.428 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:13.428 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:13.428 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:13.428 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:13.428 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:13.428 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:13.428 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:13.428 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:13.428 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:13.428 09:48:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:13.428 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:13.428 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:13.428 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:13.428 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:13.428 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:13.428 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:13.428 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:13.428 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:13.428 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:13.428 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:13.428 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:13.428 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:13.428 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:13.428 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:13.428 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:13.428 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.120 ms 00:11:13.428 00:11:13.428 --- 10.0.0.3 ping statistics --- 00:11:13.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:13.428 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:11:13.428 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:13.428 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:13.428 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.048 ms 00:11:13.428 00:11:13.428 --- 10.0.0.4 ping statistics --- 00:11:13.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:13.428 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:11:13.428 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:13.428 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:13.428 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:11:13.428 00:11:13.428 --- 10.0.0.1 ping statistics --- 00:11:13.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:13.428 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:11:13.428 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:13.428 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:13.428 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:11:13.428 00:11:13.428 --- 10.0.0.2 ping statistics --- 00:11:13.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:13.428 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:11:13.428 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:13.428 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@461 -- # return 0 00:11:13.428 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:13.428 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:13.428 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:13.428 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:13.428 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:13.428 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:13.428 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:13.428 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:11:13.428 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:13.428 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:13.428 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:13.428 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=67165 00:11:13.428 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 67165 00:11:13.429 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:11:13.429 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 67165 ']' 00:11:13.429 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:13.429 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:13.429 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
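Condensed sketch of the topology nvmf_veth_init builds before the pings above, keeping one veth pair per side; every command mirrors one traced a few lines earlier, with the second initiator/target pair and the SPDK_NVMF comment tag on the iptables rule omitted for brevity.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP to the listener port
  ping -c 1 10.0.0.3                                                  # host -> target namespace, as in the statistics above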
00:11:13.429 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:13.429 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:13.997 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:13.997 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:11:13.997 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:13.997 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:13.997 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:13.997 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:13.997 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=67190 00:11:13.997 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:11:13.997 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:11:13.997 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:11:13.997 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:11:13.997 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:13.997 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:11:13.997 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:11:13.997 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:11:13.997 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:11:13.997 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=4474fa30cc39b802efc4c551144f12c3789474d461353515 00:11:13.997 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:11:13.997 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Sle 00:11:13.997 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 4474fa30cc39b802efc4c551144f12c3789474d461353515 0 00:11:13.997 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 4474fa30cc39b802efc4c551144f12c3789474d461353515 0 00:11:13.997 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:11:13.997 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:11:13.997 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=4474fa30cc39b802efc4c551144f12c3789474d461353515 00:11:13.997 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:11:13.997 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:11:13.997 09:48:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Sle 00:11:13.997 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Sle 00:11:13.997 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.Sle 00:11:13.997 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:11:13.997 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:11:13.997 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:13.997 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:11:13.997 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:11:13.997 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:11:13.997 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:11:13.997 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=b2cf4309f1961c51550fa614505abef2845c3e6bca6b4a1e0ae7868c6a474a59 00:11:13.997 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:11:13.998 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Lk8 00:11:13.998 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key b2cf4309f1961c51550fa614505abef2845c3e6bca6b4a1e0ae7868c6a474a59 3 00:11:13.998 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 b2cf4309f1961c51550fa614505abef2845c3e6bca6b4a1e0ae7868c6a474a59 3 00:11:13.998 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:11:13.998 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:11:13.998 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=b2cf4309f1961c51550fa614505abef2845c3e6bca6b4a1e0ae7868c6a474a59 00:11:13.998 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:11:13.998 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:11:13.998 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Lk8 00:11:13.998 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Lk8 00:11:13.998 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.Lk8 00:11:13.998 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:11:13.998 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:11:13.998 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:13.998 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:11:13.998 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:11:13.998 09:48:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:11:13.998 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:11:13.998 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=fe441429f0780bea73d89af924dd5a0d 00:11:13.998 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:11:13.998 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.o1v 00:11:13.998 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key fe441429f0780bea73d89af924dd5a0d 1 00:11:13.998 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 fe441429f0780bea73d89af924dd5a0d 1 00:11:13.998 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:11:13.998 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:11:13.998 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=fe441429f0780bea73d89af924dd5a0d 00:11:13.998 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:11:13.998 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:11:14.257 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.o1v 00:11:14.257 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.o1v 00:11:14.257 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.o1v 00:11:14.257 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:11:14.257 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:11:14.257 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:14.257 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:11:14.257 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:11:14.257 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:11:14.257 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:11:14.257 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=bb5ba80b3454545596c4205f0144845f5263a1f1f66c7c17 00:11:14.257 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:11:14.257 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.ovk 00:11:14.257 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key bb5ba80b3454545596c4205f0144845f5263a1f1f66c7c17 2 00:11:14.257 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 bb5ba80b3454545596c4205f0144845f5263a1f1f66c7c17 2 00:11:14.257 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:11:14.257 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # prefix=DHHC-1 00:11:14.257 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=bb5ba80b3454545596c4205f0144845f5263a1f1f66c7c17 00:11:14.257 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:11:14.257 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:11:14.257 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.ovk 00:11:14.257 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.ovk 00:11:14.257 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.ovk 00:11:14.257 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:11:14.257 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:11:14.257 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:14.257 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:11:14.257 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:11:14.257 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:11:14.257 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:11:14.257 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=57659fe3e9d8e085fcd029eb210f990ef55c25908ca9a362 00:11:14.257 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:11:14.257 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.2DG 00:11:14.257 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 57659fe3e9d8e085fcd029eb210f990ef55c25908ca9a362 2 00:11:14.257 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 57659fe3e9d8e085fcd029eb210f990ef55c25908ca9a362 2 00:11:14.257 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:11:14.257 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:11:14.258 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=57659fe3e9d8e085fcd029eb210f990ef55c25908ca9a362 00:11:14.258 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:11:14.258 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:11:14.258 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.2DG 00:11:14.258 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.2DG 00:11:14.258 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.2DG 00:11:14.258 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:11:14.258 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:11:14.258 09:48:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:14.258 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:11:14.258 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:11:14.258 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:11:14.258 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:11:14.258 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=12ce57b4871ef4f2f1d3536e232d2164 00:11:14.258 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:11:14.258 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Xrv 00:11:14.258 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 12ce57b4871ef4f2f1d3536e232d2164 1 00:11:14.258 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 12ce57b4871ef4f2f1d3536e232d2164 1 00:11:14.258 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:11:14.258 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:11:14.258 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=12ce57b4871ef4f2f1d3536e232d2164 00:11:14.258 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:11:14.258 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:11:14.258 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Xrv 00:11:14.517 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Xrv 00:11:14.517 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.Xrv 00:11:14.517 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:11:14.517 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:11:14.517 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:14.517 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:11:14.517 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:11:14.517 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:11:14.517 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:11:14.517 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=7c1c689f552c79559ffb59d7342627c540ebe656e4c9636766fe160436f62830 00:11:14.517 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:11:14.517 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.1rI 00:11:14.517 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 
7c1c689f552c79559ffb59d7342627c540ebe656e4c9636766fe160436f62830 3 00:11:14.517 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 7c1c689f552c79559ffb59d7342627c540ebe656e4c9636766fe160436f62830 3 00:11:14.517 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:11:14.517 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:11:14.517 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=7c1c689f552c79559ffb59d7342627c540ebe656e4c9636766fe160436f62830 00:11:14.517 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:11:14.517 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:11:14.517 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.1rI 00:11:14.517 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.1rI 00:11:14.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:14.517 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.1rI 00:11:14.517 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:11:14.517 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 67165 00:11:14.517 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 67165 ']' 00:11:14.517 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:14.517 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:14.517 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:14.517 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:14.517 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:14.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:11:14.776 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:14.776 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:11:14.776 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 67190 /var/tmp/host.sock 00:11:14.776 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 67190 ']' 00:11:14.776 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:11:14.776 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:14.776 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
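Each gen_dhchap_key call above draws len/2 random bytes with xxd, writes the formatted secret to a mktemp file, and tightens it to mode 0600; the file path, not the secret itself, is what gets handed to keyring_file_add_key in the next step. The formatting appears to produce the DH-HMAC-CHAP secret representation seen later in the nvme connect commands, DHHC-1:<hash indicator>:<base64 of the secret with a CRC-32 appended>:. A rough equivalent of that wrapping is sketched below; the hash indicators (00=null, 01=sha256, 02=sha384, 03=sha512) and the CRC handling (zlib CRC-32, appended little-endian before base64) are assumptions, not copied from nvmf/common.sh.

# Sketch of one gen_dhchap_key/format_dhchap_key pass as traced above (null digest, 48-char key).
digest=0 len=48
hexkey=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # len=48 -> 24 random bytes, 48 hex chars
file=$(mktemp -t spdk.key-null.XXX)
python3 - "$hexkey" "$digest" > "$file" <<'PYEOF'
import base64, struct, sys, zlib
secret, digest = sys.argv[1].encode(), int(sys.argv[2])
blob = secret + struct.pack('<I', zlib.crc32(secret))   # CRC-32 suffix is an assumption
print(f"DHHC-1:{digest:02}:{base64.b64encode(blob).decode()}:")
PYEOF
chmod 0600 "$file"   # the file path is what keyring_file_add_key registers next
echo "$file"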
00:11:14.776 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:14.776 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:15.035 09:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:15.035 09:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:11:15.035 09:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:11:15.035 09:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.035 09:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:15.035 09:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.035 09:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:11:15.035 09:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Sle 00:11:15.035 09:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.035 09:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:15.035 09:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.035 09:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.Sle 00:11:15.035 09:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.Sle 00:11:15.294 09:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.Lk8 ]] 00:11:15.294 09:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Lk8 00:11:15.294 09:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.294 09:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:15.553 09:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.553 09:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Lk8 00:11:15.553 09:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Lk8 00:11:15.811 09:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:11:15.812 09:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.o1v 00:11:15.812 09:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.812 09:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:15.812 09:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.812 09:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.o1v 00:11:15.812 09:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.o1v 00:11:16.071 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.ovk ]] 00:11:16.071 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.ovk 00:11:16.071 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.071 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:16.071 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.071 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.ovk 00:11:16.071 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.ovk 00:11:16.329 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:11:16.329 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.2DG 00:11:16.329 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.329 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:16.329 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.330 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.2DG 00:11:16.330 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.2DG 00:11:16.588 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.Xrv ]] 00:11:16.588 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Xrv 00:11:16.588 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.588 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:16.588 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.588 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Xrv 00:11:16.588 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Xrv 00:11:16.848 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:11:16.848 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.1rI 00:11:16.848 09:48:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.848 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:16.848 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.848 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.1rI 00:11:16.848 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.1rI 00:11:17.107 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:11:17.107 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:11:17.107 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:17.107 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:17.107 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:17.107 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:17.366 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:11:17.366 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:17.366 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:17.366 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:17.366 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:17.366 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:17.366 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:17.366 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.366 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:17.366 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.366 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:17.366 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:17.366 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:17.625 00:11:17.626 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:17.626 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:17.626 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:17.884 09:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:17.885 09:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:17.885 09:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.885 09:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:17.885 09:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.885 09:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:17.885 { 00:11:17.885 "cntlid": 1, 00:11:17.885 "qid": 0, 00:11:17.885 "state": "enabled", 00:11:17.885 "thread": "nvmf_tgt_poll_group_000", 00:11:17.885 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7", 00:11:17.885 "listen_address": { 00:11:17.885 "trtype": "TCP", 00:11:17.885 "adrfam": "IPv4", 00:11:17.885 "traddr": "10.0.0.3", 00:11:17.885 "trsvcid": "4420" 00:11:17.885 }, 00:11:17.885 "peer_address": { 00:11:17.885 "trtype": "TCP", 00:11:17.885 "adrfam": "IPv4", 00:11:17.885 "traddr": "10.0.0.1", 00:11:17.885 "trsvcid": "56744" 00:11:17.885 }, 00:11:17.885 "auth": { 00:11:17.885 "state": "completed", 00:11:17.885 "digest": "sha256", 00:11:17.885 "dhgroup": "null" 00:11:17.885 } 00:11:17.885 } 00:11:17.885 ]' 00:11:17.885 09:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:18.144 09:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:18.144 09:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:18.144 09:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:18.144 09:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:18.144 09:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:18.144 09:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:18.144 09:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:18.403 09:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDQ3NGZhMzBjYzM5YjgwMmVmYzRjNTUxMTQ0ZjEyYzM3ODk0NzRkNDYxMzUzNTE18ttuoA==: --dhchap-ctrl-secret DHHC-1:03:YjJjZjQzMDlmMTk2MWM1MTU1MGZhNjE0NTA1YWJlZjI4NDVjM2U2YmNhNmI0YTFlMGFlNzg2OGM2YTQ3NGE1OWdh3Iw=: 00:11:18.403 09:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --hostid 8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -l 0 --dhchap-secret DHHC-1:00:NDQ3NGZhMzBjYzM5YjgwMmVmYzRjNTUxMTQ0ZjEyYzM3ODk0NzRkNDYxMzUzNTE18ttuoA==: --dhchap-ctrl-secret DHHC-1:03:YjJjZjQzMDlmMTk2MWM1MTU1MGZhNjE0NTA1YWJlZjI4NDVjM2U2YmNhNmI0YTFlMGFlNzg2OGM2YTQ3NGE1OWdh3Iw=: 00:11:23.673 09:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:23.673 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:23.674 09:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:11:23.674 09:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.674 09:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:23.674 09:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.674 09:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:23.674 09:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:23.674 09:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:23.674 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:11:23.674 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:23.674 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:23.674 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:23.674 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:23.674 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:23.674 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:23.674 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.674 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:23.674 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.674 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:23.674 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:23.674 09:48:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:23.674 00:11:23.674 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:23.674 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:23.674 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:23.674 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:23.674 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:23.674 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.674 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:23.674 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.674 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:23.674 { 00:11:23.674 "cntlid": 3, 00:11:23.674 "qid": 0, 00:11:23.674 "state": "enabled", 00:11:23.674 "thread": "nvmf_tgt_poll_group_000", 00:11:23.674 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7", 00:11:23.674 "listen_address": { 00:11:23.674 "trtype": "TCP", 00:11:23.674 "adrfam": "IPv4", 00:11:23.674 "traddr": "10.0.0.3", 00:11:23.674 "trsvcid": "4420" 00:11:23.674 }, 00:11:23.674 "peer_address": { 00:11:23.674 "trtype": "TCP", 00:11:23.674 "adrfam": "IPv4", 00:11:23.674 "traddr": "10.0.0.1", 00:11:23.674 "trsvcid": "46020" 00:11:23.674 }, 00:11:23.674 "auth": { 00:11:23.674 "state": "completed", 00:11:23.674 "digest": "sha256", 00:11:23.674 "dhgroup": "null" 00:11:23.674 } 00:11:23.674 } 00:11:23.674 ]' 00:11:23.674 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:23.932 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:23.932 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:23.932 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:23.932 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:23.932 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:23.932 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:23.932 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:24.190 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmU0NDE0MjlmMDc4MGJlYTczZDg5YWY5MjRkZDVhMGToiSZf: --dhchap-ctrl-secret 
DHHC-1:02:YmI1YmE4MGIzNDU0NTQ1NTk2YzQyMDVmMDE0NDg0NWY1MjYzYTFmMWY2NmM3YzE3ilgsEw==: 00:11:24.190 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --hostid 8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -l 0 --dhchap-secret DHHC-1:01:ZmU0NDE0MjlmMDc4MGJlYTczZDg5YWY5MjRkZDVhMGToiSZf: --dhchap-ctrl-secret DHHC-1:02:YmI1YmE4MGIzNDU0NTQ1NTk2YzQyMDVmMDE0NDg0NWY1MjYzYTFmMWY2NmM3YzE3ilgsEw==: 00:11:24.784 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:24.784 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:24.785 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:11:24.785 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.785 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:24.785 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.785 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:24.785 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:24.785 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:25.043 09:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:11:25.043 09:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:25.043 09:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:25.043 09:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:25.043 09:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:25.043 09:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:25.043 09:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:25.043 09:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.043 09:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.043 09:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.043 09:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:25.043 09:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:25.043 09:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:25.608 00:11:25.608 09:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:25.608 09:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:25.608 09:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:25.866 09:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:25.866 09:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:25.866 09:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.866 09:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.866 09:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.866 09:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:25.866 { 00:11:25.866 "cntlid": 5, 00:11:25.866 "qid": 0, 00:11:25.866 "state": "enabled", 00:11:25.866 "thread": "nvmf_tgt_poll_group_000", 00:11:25.866 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7", 00:11:25.866 "listen_address": { 00:11:25.866 "trtype": "TCP", 00:11:25.866 "adrfam": "IPv4", 00:11:25.866 "traddr": "10.0.0.3", 00:11:25.866 "trsvcid": "4420" 00:11:25.866 }, 00:11:25.866 "peer_address": { 00:11:25.866 "trtype": "TCP", 00:11:25.866 "adrfam": "IPv4", 00:11:25.866 "traddr": "10.0.0.1", 00:11:25.866 "trsvcid": "46054" 00:11:25.866 }, 00:11:25.866 "auth": { 00:11:25.866 "state": "completed", 00:11:25.866 "digest": "sha256", 00:11:25.866 "dhgroup": "null" 00:11:25.866 } 00:11:25.866 } 00:11:25.866 ]' 00:11:25.866 09:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:25.866 09:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:25.866 09:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:25.866 09:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:25.866 09:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:25.866 09:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:25.866 09:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:25.867 09:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:26.124 09:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:NTc2NTlmZTNlOWQ4ZTA4NWZjZDAyOWViMjEwZjk5MGVmNTVjMjU5MDhjYTlhMzYyF3mKKA==: --dhchap-ctrl-secret DHHC-1:01:MTJjZTU3YjQ4NzFlZjRmMmYxZDM1MzZlMjMyZDIxNjTWc7wa: 00:11:26.124 09:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --hostid 8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -l 0 --dhchap-secret DHHC-1:02:NTc2NTlmZTNlOWQ4ZTA4NWZjZDAyOWViMjEwZjk5MGVmNTVjMjU5MDhjYTlhMzYyF3mKKA==: --dhchap-ctrl-secret DHHC-1:01:MTJjZTU3YjQ4NzFlZjRmMmYxZDM1MzZlMjMyZDIxNjTWc7wa: 00:11:26.689 09:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:26.689 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:26.689 09:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:11:26.689 09:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.689 09:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:26.689 09:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.689 09:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:26.689 09:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:26.689 09:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:27.254 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:11:27.254 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:27.254 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:27.254 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:27.254 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:27.254 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:27.254 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --dhchap-key key3 00:11:27.254 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.254 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:27.254 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.254 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:27.254 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:27.255 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:27.255 00:11:27.513 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:27.513 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:27.513 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:27.773 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:27.773 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:27.773 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.773 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:27.773 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.773 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:27.773 { 00:11:27.773 "cntlid": 7, 00:11:27.773 "qid": 0, 00:11:27.773 "state": "enabled", 00:11:27.773 "thread": "nvmf_tgt_poll_group_000", 00:11:27.773 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7", 00:11:27.773 "listen_address": { 00:11:27.773 "trtype": "TCP", 00:11:27.773 "adrfam": "IPv4", 00:11:27.773 "traddr": "10.0.0.3", 00:11:27.773 "trsvcid": "4420" 00:11:27.773 }, 00:11:27.773 "peer_address": { 00:11:27.773 "trtype": "TCP", 00:11:27.773 "adrfam": "IPv4", 00:11:27.773 "traddr": "10.0.0.1", 00:11:27.773 "trsvcid": "46062" 00:11:27.773 }, 00:11:27.773 "auth": { 00:11:27.773 "state": "completed", 00:11:27.773 "digest": "sha256", 00:11:27.773 "dhgroup": "null" 00:11:27.773 } 00:11:27.773 } 00:11:27.773 ]' 00:11:27.773 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:27.773 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:27.773 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:27.773 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:27.773 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:27.773 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:27.773 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:27.773 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:28.343 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:N2MxYzY4OWY1NTJjNzk1NTlmZmI1OWQ3MzQyNjI3YzU0MGViZTY1NmU0Yzk2MzY3NjZmZTE2MDQzNmY2MjgzMFH2Qo0=: 00:11:28.343 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --hostid 8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -l 0 --dhchap-secret DHHC-1:03:N2MxYzY4OWY1NTJjNzk1NTlmZmI1OWQ3MzQyNjI3YzU0MGViZTY1NmU0Yzk2MzY3NjZmZTE2MDQzNmY2MjgzMFH2Qo0=: 00:11:28.911 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:28.911 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:28.911 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:11:28.911 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.911 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:28.911 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.911 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:28.911 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:28.911 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:28.911 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:28.911 09:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:11:28.911 09:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:28.911 09:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:28.911 09:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:28.911 09:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:28.911 09:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:28.911 09:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:28.911 09:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.911 09:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:28.911 09:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.911 09:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:28.911 09:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t 
tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:28.911 09:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:29.478 00:11:29.478 09:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:29.478 09:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:29.478 09:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:29.737 09:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:29.737 09:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:29.737 09:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.737 09:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.737 09:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.737 09:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:29.737 { 00:11:29.737 "cntlid": 9, 00:11:29.737 "qid": 0, 00:11:29.737 "state": "enabled", 00:11:29.737 "thread": "nvmf_tgt_poll_group_000", 00:11:29.737 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7", 00:11:29.737 "listen_address": { 00:11:29.737 "trtype": "TCP", 00:11:29.737 "adrfam": "IPv4", 00:11:29.737 "traddr": "10.0.0.3", 00:11:29.737 "trsvcid": "4420" 00:11:29.737 }, 00:11:29.737 "peer_address": { 00:11:29.737 "trtype": "TCP", 00:11:29.737 "adrfam": "IPv4", 00:11:29.737 "traddr": "10.0.0.1", 00:11:29.737 "trsvcid": "53322" 00:11:29.737 }, 00:11:29.737 "auth": { 00:11:29.737 "state": "completed", 00:11:29.737 "digest": "sha256", 00:11:29.737 "dhgroup": "ffdhe2048" 00:11:29.737 } 00:11:29.737 } 00:11:29.737 ]' 00:11:29.737 09:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:29.737 09:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:29.737 09:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:29.737 09:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:29.737 09:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:29.738 09:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:29.738 09:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:29.738 09:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:29.996 
09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDQ3NGZhMzBjYzM5YjgwMmVmYzRjNTUxMTQ0ZjEyYzM3ODk0NzRkNDYxMzUzNTE18ttuoA==: --dhchap-ctrl-secret DHHC-1:03:YjJjZjQzMDlmMTk2MWM1MTU1MGZhNjE0NTA1YWJlZjI4NDVjM2U2YmNhNmI0YTFlMGFlNzg2OGM2YTQ3NGE1OWdh3Iw=: 00:11:29.996 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --hostid 8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -l 0 --dhchap-secret DHHC-1:00:NDQ3NGZhMzBjYzM5YjgwMmVmYzRjNTUxMTQ0ZjEyYzM3ODk0NzRkNDYxMzUzNTE18ttuoA==: --dhchap-ctrl-secret DHHC-1:03:YjJjZjQzMDlmMTk2MWM1MTU1MGZhNjE0NTA1YWJlZjI4NDVjM2U2YmNhNmI0YTFlMGFlNzg2OGM2YTQ3NGE1OWdh3Iw=: 00:11:30.568 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:30.568 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:30.568 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:11:30.568 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.568 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:30.568 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.568 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:30.568 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:30.568 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:31.136 09:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:11:31.136 09:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:31.136 09:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:31.136 09:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:31.136 09:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:31.136 09:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:31.136 09:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:31.136 09:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.136 09:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:31.136 09:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.136 09:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:31.136 09:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:31.136 09:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:31.395 00:11:31.395 09:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:31.395 09:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:31.395 09:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:31.653 09:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:31.653 09:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:31.653 09:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.653 09:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:31.653 09:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.653 09:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:31.653 { 00:11:31.653 "cntlid": 11, 00:11:31.653 "qid": 0, 00:11:31.653 "state": "enabled", 00:11:31.653 "thread": "nvmf_tgt_poll_group_000", 00:11:31.653 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7", 00:11:31.653 "listen_address": { 00:11:31.653 "trtype": "TCP", 00:11:31.653 "adrfam": "IPv4", 00:11:31.653 "traddr": "10.0.0.3", 00:11:31.653 "trsvcid": "4420" 00:11:31.653 }, 00:11:31.653 "peer_address": { 00:11:31.653 "trtype": "TCP", 00:11:31.653 "adrfam": "IPv4", 00:11:31.653 "traddr": "10.0.0.1", 00:11:31.653 "trsvcid": "53342" 00:11:31.653 }, 00:11:31.653 "auth": { 00:11:31.653 "state": "completed", 00:11:31.653 "digest": "sha256", 00:11:31.653 "dhgroup": "ffdhe2048" 00:11:31.653 } 00:11:31.653 } 00:11:31.653 ]' 00:11:31.653 09:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:31.653 09:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:31.653 09:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:31.912 09:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:31.912 09:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:31.912 09:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:31.912 09:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:31.912 
09:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:32.171 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmU0NDE0MjlmMDc4MGJlYTczZDg5YWY5MjRkZDVhMGToiSZf: --dhchap-ctrl-secret DHHC-1:02:YmI1YmE4MGIzNDU0NTQ1NTk2YzQyMDVmMDE0NDg0NWY1MjYzYTFmMWY2NmM3YzE3ilgsEw==: 00:11:32.171 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --hostid 8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -l 0 --dhchap-secret DHHC-1:01:ZmU0NDE0MjlmMDc4MGJlYTczZDg5YWY5MjRkZDVhMGToiSZf: --dhchap-ctrl-secret DHHC-1:02:YmI1YmE4MGIzNDU0NTQ1NTk2YzQyMDVmMDE0NDg0NWY1MjYzYTFmMWY2NmM3YzE3ilgsEw==: 00:11:32.738 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:32.738 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:32.738 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:11:32.738 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.738 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:32.738 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.738 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:32.738 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:32.738 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:33.305 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:11:33.305 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:33.305 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:33.305 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:33.305 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:33.305 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:33.305 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:33.305 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.305 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.305 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:11:33.305 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:33.305 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:33.306 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:33.565 00:11:33.565 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:33.565 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:33.565 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:33.825 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:33.825 09:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:33.825 09:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.825 09:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.825 09:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.825 09:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:33.825 { 00:11:33.825 "cntlid": 13, 00:11:33.825 "qid": 0, 00:11:33.825 "state": "enabled", 00:11:33.825 "thread": "nvmf_tgt_poll_group_000", 00:11:33.825 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7", 00:11:33.825 "listen_address": { 00:11:33.825 "trtype": "TCP", 00:11:33.825 "adrfam": "IPv4", 00:11:33.825 "traddr": "10.0.0.3", 00:11:33.825 "trsvcid": "4420" 00:11:33.825 }, 00:11:33.825 "peer_address": { 00:11:33.825 "trtype": "TCP", 00:11:33.825 "adrfam": "IPv4", 00:11:33.825 "traddr": "10.0.0.1", 00:11:33.825 "trsvcid": "53380" 00:11:33.825 }, 00:11:33.825 "auth": { 00:11:33.825 "state": "completed", 00:11:33.825 "digest": "sha256", 00:11:33.825 "dhgroup": "ffdhe2048" 00:11:33.825 } 00:11:33.825 } 00:11:33.825 ]' 00:11:33.825 09:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:33.825 09:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:33.825 09:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:34.083 09:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:34.083 09:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:34.083 09:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:34.083 09:48:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:34.083 09:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:34.343 09:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTc2NTlmZTNlOWQ4ZTA4NWZjZDAyOWViMjEwZjk5MGVmNTVjMjU5MDhjYTlhMzYyF3mKKA==: --dhchap-ctrl-secret DHHC-1:01:MTJjZTU3YjQ4NzFlZjRmMmYxZDM1MzZlMjMyZDIxNjTWc7wa: 00:11:34.343 09:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --hostid 8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -l 0 --dhchap-secret DHHC-1:02:NTc2NTlmZTNlOWQ4ZTA4NWZjZDAyOWViMjEwZjk5MGVmNTVjMjU5MDhjYTlhMzYyF3mKKA==: --dhchap-ctrl-secret DHHC-1:01:MTJjZTU3YjQ4NzFlZjRmMmYxZDM1MzZlMjMyZDIxNjTWc7wa: 00:11:34.910 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:34.910 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:34.911 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:11:34.911 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.911 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.169 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.169 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:35.169 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:35.169 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:35.427 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:11:35.427 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:35.427 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:35.427 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:35.427 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:35.427 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:35.427 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --dhchap-key key3 00:11:35.427 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.427 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
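The trace above and below repeats the same DH-HMAC-CHAP cycle for every digest / DH-group / key combination (sha256 with ffdhe2048, ffdhe3072, ffdhe4096 and keys 0-3 in this section). A condensed, non-authoritative sketch of one iteration, reconstructed only from the commands visible in the trace (same NQNs, target address, and rpc.py socket; the DHHC-1 secrets are elided into placeholder variables, and rpc_cmd is assumed to be the autotest wrapper for the target-side RPC socket):

# hostrpc in the trace expands to the initiator-side rpc.py call on /var/tmp/host.sock.
HOSTRPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock"
HOSTNQN="nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7"
SUBNQN="nqn.2024-03.io.spdk:cnode0"

# 1. Restrict the host to the digest / DH group under test.
$HOSTRPC bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

# 2. Allow the host on the subsystem with key N (plus ckeyN when a controller key is defined).
rpc_cmd nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0

# 3. Attach a controller over TCP with the same keys, then verify the authenticated qpair.
$HOSTRPC bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
    -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
$HOSTRPC bdev_nvme_get_controllers | jq -r '.[].name'        # expect nvme0
rpc_cmd nvmf_subsystem_get_qpairs "$SUBNQN"                   # .auth.digest/.dhgroup/.state checked via jq
$HOSTRPC bdev_nvme_detach_controller nvme0

# 4. Repeat the handshake through the kernel initiator, then tear down.
#    KEY0/CKEY0 stand for the DHHC-1:xx:... secrets printed in the trace.
nvme connect -t tcp -a 10.0.0.3 -n "$SUBNQN" -i 1 -q "$HOSTNQN" \
    --hostid 8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -l 0 \
    --dhchap-secret "$KEY0" --dhchap-ctrl-secret "$CKEY0"
nvme disconnect -n "$SUBNQN"
rpc_cmd nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"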
00:11:35.427 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.427 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:35.427 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:35.427 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:35.686 00:11:35.686 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:35.687 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:35.687 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:35.945 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:35.945 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:35.945 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.945 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.945 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.945 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:35.945 { 00:11:35.945 "cntlid": 15, 00:11:35.945 "qid": 0, 00:11:35.945 "state": "enabled", 00:11:35.945 "thread": "nvmf_tgt_poll_group_000", 00:11:35.945 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7", 00:11:35.945 "listen_address": { 00:11:35.945 "trtype": "TCP", 00:11:35.945 "adrfam": "IPv4", 00:11:35.945 "traddr": "10.0.0.3", 00:11:35.945 "trsvcid": "4420" 00:11:35.945 }, 00:11:35.945 "peer_address": { 00:11:35.945 "trtype": "TCP", 00:11:35.945 "adrfam": "IPv4", 00:11:35.945 "traddr": "10.0.0.1", 00:11:35.945 "trsvcid": "53406" 00:11:35.945 }, 00:11:35.945 "auth": { 00:11:35.945 "state": "completed", 00:11:35.945 "digest": "sha256", 00:11:35.945 "dhgroup": "ffdhe2048" 00:11:35.945 } 00:11:35.945 } 00:11:35.945 ]' 00:11:35.945 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:36.204 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:36.204 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:36.204 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:36.204 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:36.204 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:36.204 
09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:36.204 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:36.463 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2MxYzY4OWY1NTJjNzk1NTlmZmI1OWQ3MzQyNjI3YzU0MGViZTY1NmU0Yzk2MzY3NjZmZTE2MDQzNmY2MjgzMFH2Qo0=: 00:11:36.463 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --hostid 8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -l 0 --dhchap-secret DHHC-1:03:N2MxYzY4OWY1NTJjNzk1NTlmZmI1OWQ3MzQyNjI3YzU0MGViZTY1NmU0Yzk2MzY3NjZmZTE2MDQzNmY2MjgzMFH2Qo0=: 00:11:37.399 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:37.399 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:37.399 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:11:37.399 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.399 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.399 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.399 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:37.399 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:37.399 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:37.399 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:37.657 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:11:37.657 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:37.657 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:37.657 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:37.657 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:37.657 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:37.657 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:37.657 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.657 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:11:37.657 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.657 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:37.657 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:37.657 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:37.916 00:11:37.916 09:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:37.916 09:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:37.916 09:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:38.187 09:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:38.187 09:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:38.187 09:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.187 09:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.187 09:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.187 09:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:38.187 { 00:11:38.187 "cntlid": 17, 00:11:38.187 "qid": 0, 00:11:38.187 "state": "enabled", 00:11:38.187 "thread": "nvmf_tgt_poll_group_000", 00:11:38.187 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7", 00:11:38.187 "listen_address": { 00:11:38.187 "trtype": "TCP", 00:11:38.187 "adrfam": "IPv4", 00:11:38.187 "traddr": "10.0.0.3", 00:11:38.187 "trsvcid": "4420" 00:11:38.187 }, 00:11:38.187 "peer_address": { 00:11:38.187 "trtype": "TCP", 00:11:38.187 "adrfam": "IPv4", 00:11:38.187 "traddr": "10.0.0.1", 00:11:38.187 "trsvcid": "53418" 00:11:38.187 }, 00:11:38.187 "auth": { 00:11:38.187 "state": "completed", 00:11:38.187 "digest": "sha256", 00:11:38.187 "dhgroup": "ffdhe3072" 00:11:38.187 } 00:11:38.187 } 00:11:38.187 ]' 00:11:38.187 09:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:38.187 09:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:38.187 09:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:38.458 09:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:38.458 09:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:38.458 09:49:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:38.458 09:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:38.458 09:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:38.717 09:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDQ3NGZhMzBjYzM5YjgwMmVmYzRjNTUxMTQ0ZjEyYzM3ODk0NzRkNDYxMzUzNTE18ttuoA==: --dhchap-ctrl-secret DHHC-1:03:YjJjZjQzMDlmMTk2MWM1MTU1MGZhNjE0NTA1YWJlZjI4NDVjM2U2YmNhNmI0YTFlMGFlNzg2OGM2YTQ3NGE1OWdh3Iw=: 00:11:38.717 09:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --hostid 8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -l 0 --dhchap-secret DHHC-1:00:NDQ3NGZhMzBjYzM5YjgwMmVmYzRjNTUxMTQ0ZjEyYzM3ODk0NzRkNDYxMzUzNTE18ttuoA==: --dhchap-ctrl-secret DHHC-1:03:YjJjZjQzMDlmMTk2MWM1MTU1MGZhNjE0NTA1YWJlZjI4NDVjM2U2YmNhNmI0YTFlMGFlNzg2OGM2YTQ3NGE1OWdh3Iw=: 00:11:39.285 09:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:39.285 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:39.285 09:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:11:39.285 09:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.285 09:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.285 09:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.285 09:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:39.285 09:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:39.285 09:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:39.853 09:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:11:39.853 09:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:39.853 09:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:39.853 09:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:39.853 09:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:39.853 09:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:39.853 09:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:11:39.853 09:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.853 09:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.853 09:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.853 09:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:39.853 09:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:39.853 09:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:40.130 00:11:40.130 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:40.130 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:40.130 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:40.388 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:40.388 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:40.388 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.388 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.388 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.388 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:40.388 { 00:11:40.388 "cntlid": 19, 00:11:40.388 "qid": 0, 00:11:40.388 "state": "enabled", 00:11:40.388 "thread": "nvmf_tgt_poll_group_000", 00:11:40.388 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7", 00:11:40.388 "listen_address": { 00:11:40.388 "trtype": "TCP", 00:11:40.388 "adrfam": "IPv4", 00:11:40.388 "traddr": "10.0.0.3", 00:11:40.388 "trsvcid": "4420" 00:11:40.388 }, 00:11:40.388 "peer_address": { 00:11:40.388 "trtype": "TCP", 00:11:40.388 "adrfam": "IPv4", 00:11:40.388 "traddr": "10.0.0.1", 00:11:40.388 "trsvcid": "33074" 00:11:40.388 }, 00:11:40.388 "auth": { 00:11:40.388 "state": "completed", 00:11:40.388 "digest": "sha256", 00:11:40.388 "dhgroup": "ffdhe3072" 00:11:40.388 } 00:11:40.388 } 00:11:40.388 ]' 00:11:40.388 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:40.388 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:40.388 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:40.645 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:40.645 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:40.645 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:40.645 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:40.645 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:40.903 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmU0NDE0MjlmMDc4MGJlYTczZDg5YWY5MjRkZDVhMGToiSZf: --dhchap-ctrl-secret DHHC-1:02:YmI1YmE4MGIzNDU0NTQ1NTk2YzQyMDVmMDE0NDg0NWY1MjYzYTFmMWY2NmM3YzE3ilgsEw==: 00:11:40.903 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --hostid 8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -l 0 --dhchap-secret DHHC-1:01:ZmU0NDE0MjlmMDc4MGJlYTczZDg5YWY5MjRkZDVhMGToiSZf: --dhchap-ctrl-secret DHHC-1:02:YmI1YmE4MGIzNDU0NTQ1NTk2YzQyMDVmMDE0NDg0NWY1MjYzYTFmMWY2NmM3YzE3ilgsEw==: 00:11:41.470 09:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:41.470 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:41.470 09:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:11:41.470 09:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.470 09:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.470 09:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.470 09:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:41.470 09:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:41.470 09:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:41.729 09:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:11:41.729 09:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:41.729 09:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:41.729 09:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:41.729 09:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:41.729 09:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:41.729 09:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:41.729 09:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.729 09:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.729 09:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.730 09:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:41.730 09:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:41.730 09:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:42.299 00:11:42.299 09:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:42.299 09:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:42.299 09:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:42.299 09:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:42.299 09:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:42.299 09:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.299 09:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.299 09:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.299 09:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:42.299 { 00:11:42.299 "cntlid": 21, 00:11:42.299 "qid": 0, 00:11:42.299 "state": "enabled", 00:11:42.299 "thread": "nvmf_tgt_poll_group_000", 00:11:42.299 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7", 00:11:42.299 "listen_address": { 00:11:42.299 "trtype": "TCP", 00:11:42.299 "adrfam": "IPv4", 00:11:42.299 "traddr": "10.0.0.3", 00:11:42.299 "trsvcid": "4420" 00:11:42.299 }, 00:11:42.299 "peer_address": { 00:11:42.299 "trtype": "TCP", 00:11:42.299 "adrfam": "IPv4", 00:11:42.299 "traddr": "10.0.0.1", 00:11:42.299 "trsvcid": "33094" 00:11:42.299 }, 00:11:42.299 "auth": { 00:11:42.299 "state": "completed", 00:11:42.299 "digest": "sha256", 00:11:42.299 "dhgroup": "ffdhe3072" 00:11:42.299 } 00:11:42.299 } 00:11:42.299 ]' 00:11:42.299 09:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:42.559 09:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:42.559 09:49:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:42.559 09:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:42.559 09:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:42.559 09:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:42.559 09:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:42.559 09:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:42.818 09:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTc2NTlmZTNlOWQ4ZTA4NWZjZDAyOWViMjEwZjk5MGVmNTVjMjU5MDhjYTlhMzYyF3mKKA==: --dhchap-ctrl-secret DHHC-1:01:MTJjZTU3YjQ4NzFlZjRmMmYxZDM1MzZlMjMyZDIxNjTWc7wa: 00:11:42.818 09:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --hostid 8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -l 0 --dhchap-secret DHHC-1:02:NTc2NTlmZTNlOWQ4ZTA4NWZjZDAyOWViMjEwZjk5MGVmNTVjMjU5MDhjYTlhMzYyF3mKKA==: --dhchap-ctrl-secret DHHC-1:01:MTJjZTU3YjQ4NzFlZjRmMmYxZDM1MzZlMjMyZDIxNjTWc7wa: 00:11:43.754 09:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:43.754 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:43.754 09:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:11:43.754 09:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.754 09:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.754 09:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.754 09:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:43.754 09:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:43.754 09:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:43.754 09:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:11:43.754 09:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:43.754 09:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:43.754 09:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:43.754 09:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:43.754 09:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:43.754 09:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --dhchap-key key3 00:11:43.754 09:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.754 09:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.754 09:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.754 09:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:43.754 09:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:43.754 09:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:44.322 00:11:44.322 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:44.322 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:44.322 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:44.580 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:44.580 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:44.580 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.580 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.580 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.580 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:44.580 { 00:11:44.580 "cntlid": 23, 00:11:44.580 "qid": 0, 00:11:44.580 "state": "enabled", 00:11:44.580 "thread": "nvmf_tgt_poll_group_000", 00:11:44.580 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7", 00:11:44.580 "listen_address": { 00:11:44.580 "trtype": "TCP", 00:11:44.580 "adrfam": "IPv4", 00:11:44.580 "traddr": "10.0.0.3", 00:11:44.580 "trsvcid": "4420" 00:11:44.580 }, 00:11:44.580 "peer_address": { 00:11:44.580 "trtype": "TCP", 00:11:44.580 "adrfam": "IPv4", 00:11:44.580 "traddr": "10.0.0.1", 00:11:44.580 "trsvcid": "33120" 00:11:44.580 }, 00:11:44.580 "auth": { 00:11:44.580 "state": "completed", 00:11:44.580 "digest": "sha256", 00:11:44.580 "dhgroup": "ffdhe3072" 00:11:44.580 } 00:11:44.580 } 00:11:44.580 ]' 00:11:44.580 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:44.580 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:11:44.581 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:44.581 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:44.581 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:44.581 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:44.581 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:44.581 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:44.839 09:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2MxYzY4OWY1NTJjNzk1NTlmZmI1OWQ3MzQyNjI3YzU0MGViZTY1NmU0Yzk2MzY3NjZmZTE2MDQzNmY2MjgzMFH2Qo0=: 00:11:44.839 09:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --hostid 8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -l 0 --dhchap-secret DHHC-1:03:N2MxYzY4OWY1NTJjNzk1NTlmZmI1OWQ3MzQyNjI3YzU0MGViZTY1NmU0Yzk2MzY3NjZmZTE2MDQzNmY2MjgzMFH2Qo0=: 00:11:45.404 09:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:45.663 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:45.663 09:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:11:45.663 09:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.663 09:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.663 09:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.663 09:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:45.663 09:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:45.663 09:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:45.663 09:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:45.922 09:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:11:45.922 09:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:45.922 09:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:45.922 09:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:45.922 09:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:45.922 09:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:45.922 09:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:45.922 09:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.923 09:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.923 09:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.923 09:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:45.923 09:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:45.923 09:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:46.180 00:11:46.180 09:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:46.180 09:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:46.180 09:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:46.437 09:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:46.437 09:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:46.437 09:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.437 09:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.437 09:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.437 09:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:46.437 { 00:11:46.437 "cntlid": 25, 00:11:46.437 "qid": 0, 00:11:46.437 "state": "enabled", 00:11:46.437 "thread": "nvmf_tgt_poll_group_000", 00:11:46.437 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7", 00:11:46.437 "listen_address": { 00:11:46.437 "trtype": "TCP", 00:11:46.437 "adrfam": "IPv4", 00:11:46.437 "traddr": "10.0.0.3", 00:11:46.437 "trsvcid": "4420" 00:11:46.437 }, 00:11:46.437 "peer_address": { 00:11:46.437 "trtype": "TCP", 00:11:46.437 "adrfam": "IPv4", 00:11:46.437 "traddr": "10.0.0.1", 00:11:46.437 "trsvcid": "33150" 00:11:46.437 }, 00:11:46.437 "auth": { 00:11:46.437 "state": "completed", 00:11:46.437 "digest": "sha256", 00:11:46.437 "dhgroup": "ffdhe4096" 00:11:46.437 } 00:11:46.437 } 00:11:46.437 ]' 00:11:46.437 09:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:11:46.694 09:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:46.694 09:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:46.694 09:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:46.694 09:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:46.694 09:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:46.694 09:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:46.694 09:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:46.952 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDQ3NGZhMzBjYzM5YjgwMmVmYzRjNTUxMTQ0ZjEyYzM3ODk0NzRkNDYxMzUzNTE18ttuoA==: --dhchap-ctrl-secret DHHC-1:03:YjJjZjQzMDlmMTk2MWM1MTU1MGZhNjE0NTA1YWJlZjI4NDVjM2U2YmNhNmI0YTFlMGFlNzg2OGM2YTQ3NGE1OWdh3Iw=: 00:11:46.952 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --hostid 8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -l 0 --dhchap-secret DHHC-1:00:NDQ3NGZhMzBjYzM5YjgwMmVmYzRjNTUxMTQ0ZjEyYzM3ODk0NzRkNDYxMzUzNTE18ttuoA==: --dhchap-ctrl-secret DHHC-1:03:YjJjZjQzMDlmMTk2MWM1MTU1MGZhNjE0NTA1YWJlZjI4NDVjM2U2YmNhNmI0YTFlMGFlNzg2OGM2YTQ3NGE1OWdh3Iw=: 00:11:47.516 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:47.516 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:47.516 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:11:47.516 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.516 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.516 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.516 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:47.516 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:47.516 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:47.774 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:11:47.774 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:47.774 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:47.774 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:47.774 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:47.774 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:47.774 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:47.774 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.774 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.032 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.032 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:48.032 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:48.032 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:48.290 00:11:48.290 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:48.290 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:48.290 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:48.549 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:48.549 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:48.549 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.549 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.549 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.549 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:48.549 { 00:11:48.549 "cntlid": 27, 00:11:48.549 "qid": 0, 00:11:48.549 "state": "enabled", 00:11:48.549 "thread": "nvmf_tgt_poll_group_000", 00:11:48.549 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7", 00:11:48.549 "listen_address": { 00:11:48.549 "trtype": "TCP", 00:11:48.549 "adrfam": "IPv4", 00:11:48.549 "traddr": "10.0.0.3", 00:11:48.549 "trsvcid": "4420" 00:11:48.549 }, 00:11:48.549 "peer_address": { 00:11:48.549 "trtype": "TCP", 00:11:48.549 "adrfam": "IPv4", 00:11:48.549 "traddr": "10.0.0.1", 00:11:48.549 "trsvcid": "33166" 00:11:48.549 }, 00:11:48.549 "auth": { 00:11:48.549 "state": "completed", 
00:11:48.549 "digest": "sha256", 00:11:48.549 "dhgroup": "ffdhe4096" 00:11:48.549 } 00:11:48.549 } 00:11:48.549 ]' 00:11:48.549 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:48.549 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:48.549 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:48.807 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:48.807 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:48.807 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:48.807 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:48.807 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:49.066 09:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmU0NDE0MjlmMDc4MGJlYTczZDg5YWY5MjRkZDVhMGToiSZf: --dhchap-ctrl-secret DHHC-1:02:YmI1YmE4MGIzNDU0NTQ1NTk2YzQyMDVmMDE0NDg0NWY1MjYzYTFmMWY2NmM3YzE3ilgsEw==: 00:11:49.066 09:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --hostid 8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -l 0 --dhchap-secret DHHC-1:01:ZmU0NDE0MjlmMDc4MGJlYTczZDg5YWY5MjRkZDVhMGToiSZf: --dhchap-ctrl-secret DHHC-1:02:YmI1YmE4MGIzNDU0NTQ1NTk2YzQyMDVmMDE0NDg0NWY1MjYzYTFmMWY2NmM3YzE3ilgsEw==: 00:11:49.632 09:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:49.632 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:49.632 09:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:11:49.632 09:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.632 09:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:49.632 09:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.632 09:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:49.632 09:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:49.632 09:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:49.891 09:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:11:49.891 09:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:49.891 09:49:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:49.891 09:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:49.891 09:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:49.891 09:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:49.891 09:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:49.891 09:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.891 09:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:49.891 09:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.891 09:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:49.891 09:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:49.891 09:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:50.457 00:11:50.457 09:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:50.457 09:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:50.457 09:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:50.715 09:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:50.715 09:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:50.715 09:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.715 09:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.715 09:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.715 09:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:50.715 { 00:11:50.715 "cntlid": 29, 00:11:50.715 "qid": 0, 00:11:50.715 "state": "enabled", 00:11:50.715 "thread": "nvmf_tgt_poll_group_000", 00:11:50.715 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7", 00:11:50.715 "listen_address": { 00:11:50.715 "trtype": "TCP", 00:11:50.715 "adrfam": "IPv4", 00:11:50.715 "traddr": "10.0.0.3", 00:11:50.715 "trsvcid": "4420" 00:11:50.715 }, 00:11:50.715 "peer_address": { 00:11:50.715 "trtype": "TCP", 00:11:50.715 "adrfam": 
"IPv4", 00:11:50.715 "traddr": "10.0.0.1", 00:11:50.715 "trsvcid": "59022" 00:11:50.715 }, 00:11:50.715 "auth": { 00:11:50.715 "state": "completed", 00:11:50.715 "digest": "sha256", 00:11:50.715 "dhgroup": "ffdhe4096" 00:11:50.715 } 00:11:50.715 } 00:11:50.715 ]' 00:11:50.715 09:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:50.715 09:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:50.715 09:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:50.715 09:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:50.715 09:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:50.715 09:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:50.715 09:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:50.715 09:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:51.283 09:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTc2NTlmZTNlOWQ4ZTA4NWZjZDAyOWViMjEwZjk5MGVmNTVjMjU5MDhjYTlhMzYyF3mKKA==: --dhchap-ctrl-secret DHHC-1:01:MTJjZTU3YjQ4NzFlZjRmMmYxZDM1MzZlMjMyZDIxNjTWc7wa: 00:11:51.283 09:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --hostid 8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -l 0 --dhchap-secret DHHC-1:02:NTc2NTlmZTNlOWQ4ZTA4NWZjZDAyOWViMjEwZjk5MGVmNTVjMjU5MDhjYTlhMzYyF3mKKA==: --dhchap-ctrl-secret DHHC-1:01:MTJjZTU3YjQ4NzFlZjRmMmYxZDM1MzZlMjMyZDIxNjTWc7wa: 00:11:51.850 09:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:51.850 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:51.850 09:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:11:51.850 09:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.850 09:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:51.850 09:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.850 09:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:51.850 09:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:51.850 09:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:52.109 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:11:52.109 09:49:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:52.109 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:52.109 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:52.109 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:52.109 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:52.109 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --dhchap-key key3 00:11:52.109 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.109 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.109 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.109 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:52.109 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:52.109 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:52.368 00:11:52.627 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:52.627 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:52.627 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:52.899 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:52.899 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:52.899 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.899 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.899 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.899 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:52.899 { 00:11:52.899 "cntlid": 31, 00:11:52.899 "qid": 0, 00:11:52.899 "state": "enabled", 00:11:52.899 "thread": "nvmf_tgt_poll_group_000", 00:11:52.899 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7", 00:11:52.899 "listen_address": { 00:11:52.899 "trtype": "TCP", 00:11:52.899 "adrfam": "IPv4", 00:11:52.899 "traddr": "10.0.0.3", 00:11:52.899 "trsvcid": "4420" 00:11:52.899 }, 00:11:52.899 "peer_address": { 00:11:52.899 "trtype": "TCP", 
00:11:52.899 "adrfam": "IPv4", 00:11:52.899 "traddr": "10.0.0.1", 00:11:52.899 "trsvcid": "59052" 00:11:52.899 }, 00:11:52.899 "auth": { 00:11:52.899 "state": "completed", 00:11:52.899 "digest": "sha256", 00:11:52.899 "dhgroup": "ffdhe4096" 00:11:52.899 } 00:11:52.899 } 00:11:52.899 ]' 00:11:52.899 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:52.899 09:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:52.899 09:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:52.899 09:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:52.899 09:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:52.899 09:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:52.899 09:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:52.899 09:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:53.181 09:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2MxYzY4OWY1NTJjNzk1NTlmZmI1OWQ3MzQyNjI3YzU0MGViZTY1NmU0Yzk2MzY3NjZmZTE2MDQzNmY2MjgzMFH2Qo0=: 00:11:53.181 09:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --hostid 8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -l 0 --dhchap-secret DHHC-1:03:N2MxYzY4OWY1NTJjNzk1NTlmZmI1OWQ3MzQyNjI3YzU0MGViZTY1NmU0Yzk2MzY3NjZmZTE2MDQzNmY2MjgzMFH2Qo0=: 00:11:54.165 09:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:54.165 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:54.165 09:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:11:54.165 09:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.165 09:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.165 09:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.165 09:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:54.165 09:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:54.165 09:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:54.165 09:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:54.423 09:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:11:54.423 
09:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:54.423 09:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:54.423 09:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:54.423 09:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:54.423 09:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:54.423 09:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:54.424 09:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.424 09:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.424 09:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.424 09:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:54.424 09:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:54.424 09:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:54.990 00:11:54.990 09:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:54.990 09:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:54.990 09:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:55.249 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:55.249 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:55.249 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.249 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:55.249 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.249 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:55.249 { 00:11:55.249 "cntlid": 33, 00:11:55.249 "qid": 0, 00:11:55.249 "state": "enabled", 00:11:55.249 "thread": "nvmf_tgt_poll_group_000", 00:11:55.249 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7", 00:11:55.249 "listen_address": { 00:11:55.249 "trtype": "TCP", 00:11:55.249 "adrfam": "IPv4", 00:11:55.249 "traddr": 
"10.0.0.3", 00:11:55.249 "trsvcid": "4420" 00:11:55.249 }, 00:11:55.249 "peer_address": { 00:11:55.249 "trtype": "TCP", 00:11:55.249 "adrfam": "IPv4", 00:11:55.249 "traddr": "10.0.0.1", 00:11:55.249 "trsvcid": "59070" 00:11:55.249 }, 00:11:55.249 "auth": { 00:11:55.249 "state": "completed", 00:11:55.249 "digest": "sha256", 00:11:55.249 "dhgroup": "ffdhe6144" 00:11:55.249 } 00:11:55.249 } 00:11:55.249 ]' 00:11:55.249 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:55.249 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:55.249 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:55.250 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:55.250 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:55.250 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:55.250 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:55.250 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:55.508 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDQ3NGZhMzBjYzM5YjgwMmVmYzRjNTUxMTQ0ZjEyYzM3ODk0NzRkNDYxMzUzNTE18ttuoA==: --dhchap-ctrl-secret DHHC-1:03:YjJjZjQzMDlmMTk2MWM1MTU1MGZhNjE0NTA1YWJlZjI4NDVjM2U2YmNhNmI0YTFlMGFlNzg2OGM2YTQ3NGE1OWdh3Iw=: 00:11:55.508 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --hostid 8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -l 0 --dhchap-secret DHHC-1:00:NDQ3NGZhMzBjYzM5YjgwMmVmYzRjNTUxMTQ0ZjEyYzM3ODk0NzRkNDYxMzUzNTE18ttuoA==: --dhchap-ctrl-secret DHHC-1:03:YjJjZjQzMDlmMTk2MWM1MTU1MGZhNjE0NTA1YWJlZjI4NDVjM2U2YmNhNmI0YTFlMGFlNzg2OGM2YTQ3NGE1OWdh3Iw=: 00:11:56.446 09:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:56.446 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:56.446 09:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:11:56.446 09:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.446 09:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.446 09:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.446 09:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:56.446 09:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:56.446 09:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:56.446 09:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:11:56.446 09:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:56.446 09:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:56.446 09:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:56.446 09:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:56.446 09:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:56.446 09:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:56.446 09:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.446 09:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.446 09:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.446 09:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:56.446 09:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:56.446 09:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:57.014 00:11:57.014 09:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:57.015 09:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:57.015 09:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:57.273 09:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:57.273 09:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:57.273 09:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.273 09:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.273 09:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.273 09:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:57.273 { 00:11:57.273 "cntlid": 35, 00:11:57.273 "qid": 0, 00:11:57.273 "state": "enabled", 00:11:57.273 "thread": "nvmf_tgt_poll_group_000", 
00:11:57.273 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7", 00:11:57.273 "listen_address": { 00:11:57.273 "trtype": "TCP", 00:11:57.273 "adrfam": "IPv4", 00:11:57.273 "traddr": "10.0.0.3", 00:11:57.273 "trsvcid": "4420" 00:11:57.273 }, 00:11:57.273 "peer_address": { 00:11:57.273 "trtype": "TCP", 00:11:57.273 "adrfam": "IPv4", 00:11:57.273 "traddr": "10.0.0.1", 00:11:57.273 "trsvcid": "59084" 00:11:57.273 }, 00:11:57.273 "auth": { 00:11:57.273 "state": "completed", 00:11:57.273 "digest": "sha256", 00:11:57.273 "dhgroup": "ffdhe6144" 00:11:57.273 } 00:11:57.273 } 00:11:57.273 ]' 00:11:57.532 09:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:57.532 09:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:57.532 09:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:57.532 09:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:57.532 09:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:57.532 09:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:57.532 09:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:57.532 09:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:57.791 09:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmU0NDE0MjlmMDc4MGJlYTczZDg5YWY5MjRkZDVhMGToiSZf: --dhchap-ctrl-secret DHHC-1:02:YmI1YmE4MGIzNDU0NTQ1NTk2YzQyMDVmMDE0NDg0NWY1MjYzYTFmMWY2NmM3YzE3ilgsEw==: 00:11:57.791 09:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --hostid 8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -l 0 --dhchap-secret DHHC-1:01:ZmU0NDE0MjlmMDc4MGJlYTczZDg5YWY5MjRkZDVhMGToiSZf: --dhchap-ctrl-secret DHHC-1:02:YmI1YmE4MGIzNDU0NTQ1NTk2YzQyMDVmMDE0NDg0NWY1MjYzYTFmMWY2NmM3YzE3ilgsEw==: 00:11:58.729 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:58.729 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:58.729 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:11:58.729 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.729 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.729 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.729 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:58.729 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:58.729 09:49:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:58.729 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:11:58.729 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:58.729 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:58.729 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:58.729 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:58.729 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:58.729 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:58.729 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.729 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.729 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.729 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:58.729 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:58.729 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:59.298 00:11:59.298 09:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:59.298 09:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:59.298 09:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:59.557 09:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:59.557 09:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:59.557 09:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.557 09:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.557 09:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.557 09:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:59.557 { 
00:11:59.557 "cntlid": 37, 00:11:59.557 "qid": 0, 00:11:59.557 "state": "enabled", 00:11:59.557 "thread": "nvmf_tgt_poll_group_000", 00:11:59.557 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7", 00:11:59.557 "listen_address": { 00:11:59.557 "trtype": "TCP", 00:11:59.557 "adrfam": "IPv4", 00:11:59.557 "traddr": "10.0.0.3", 00:11:59.557 "trsvcid": "4420" 00:11:59.557 }, 00:11:59.557 "peer_address": { 00:11:59.557 "trtype": "TCP", 00:11:59.557 "adrfam": "IPv4", 00:11:59.557 "traddr": "10.0.0.1", 00:11:59.557 "trsvcid": "36586" 00:11:59.557 }, 00:11:59.557 "auth": { 00:11:59.557 "state": "completed", 00:11:59.557 "digest": "sha256", 00:11:59.557 "dhgroup": "ffdhe6144" 00:11:59.557 } 00:11:59.557 } 00:11:59.557 ]' 00:11:59.557 09:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:59.557 09:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:59.557 09:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:59.557 09:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:59.557 09:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:59.816 09:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:59.816 09:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:59.816 09:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:00.074 09:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTc2NTlmZTNlOWQ4ZTA4NWZjZDAyOWViMjEwZjk5MGVmNTVjMjU5MDhjYTlhMzYyF3mKKA==: --dhchap-ctrl-secret DHHC-1:01:MTJjZTU3YjQ4NzFlZjRmMmYxZDM1MzZlMjMyZDIxNjTWc7wa: 00:12:00.074 09:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --hostid 8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -l 0 --dhchap-secret DHHC-1:02:NTc2NTlmZTNlOWQ4ZTA4NWZjZDAyOWViMjEwZjk5MGVmNTVjMjU5MDhjYTlhMzYyF3mKKA==: --dhchap-ctrl-secret DHHC-1:01:MTJjZTU3YjQ4NzFlZjRmMmYxZDM1MzZlMjMyZDIxNjTWc7wa: 00:12:00.638 09:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:00.638 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:00.638 09:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:12:00.638 09:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.638 09:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.638 09:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.638 09:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:00.639 09:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:00.639 09:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:00.896 09:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:12:00.896 09:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:00.896 09:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:00.896 09:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:00.896 09:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:00.896 09:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:00.896 09:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --dhchap-key key3 00:12:00.896 09:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.896 09:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.896 09:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.896 09:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:00.896 09:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:00.896 09:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:01.154 00:12:01.412 09:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:01.412 09:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:01.412 09:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:01.670 09:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:01.670 09:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:01.670 09:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.670 09:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.670 09:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.670 09:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 
00:12:01.670 { 00:12:01.670 "cntlid": 39, 00:12:01.670 "qid": 0, 00:12:01.670 "state": "enabled", 00:12:01.670 "thread": "nvmf_tgt_poll_group_000", 00:12:01.670 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7", 00:12:01.670 "listen_address": { 00:12:01.670 "trtype": "TCP", 00:12:01.670 "adrfam": "IPv4", 00:12:01.670 "traddr": "10.0.0.3", 00:12:01.670 "trsvcid": "4420" 00:12:01.670 }, 00:12:01.670 "peer_address": { 00:12:01.670 "trtype": "TCP", 00:12:01.670 "adrfam": "IPv4", 00:12:01.670 "traddr": "10.0.0.1", 00:12:01.670 "trsvcid": "36616" 00:12:01.670 }, 00:12:01.670 "auth": { 00:12:01.670 "state": "completed", 00:12:01.670 "digest": "sha256", 00:12:01.670 "dhgroup": "ffdhe6144" 00:12:01.670 } 00:12:01.670 } 00:12:01.670 ]' 00:12:01.670 09:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:01.670 09:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:01.670 09:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:01.670 09:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:01.670 09:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:01.670 09:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:01.670 09:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:01.670 09:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:01.929 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2MxYzY4OWY1NTJjNzk1NTlmZmI1OWQ3MzQyNjI3YzU0MGViZTY1NmU0Yzk2MzY3NjZmZTE2MDQzNmY2MjgzMFH2Qo0=: 00:12:01.929 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --hostid 8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -l 0 --dhchap-secret DHHC-1:03:N2MxYzY4OWY1NTJjNzk1NTlmZmI1OWQ3MzQyNjI3YzU0MGViZTY1NmU0Yzk2MzY3NjZmZTE2MDQzNmY2MjgzMFH2Qo0=: 00:12:02.864 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:02.864 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:02.864 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:12:02.864 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.864 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.864 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.864 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:02.864 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:02.864 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:02.864 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:02.864 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:12:02.864 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:02.864 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:02.864 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:02.864 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:02.864 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:02.865 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:02.865 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.865 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.865 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.865 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:02.865 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:02.865 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:03.801 00:12:03.801 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:03.801 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:03.801 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:04.061 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:04.061 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:04.061 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.061 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:04.061 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:12:04.061 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:04.061 { 00:12:04.061 "cntlid": 41, 00:12:04.061 "qid": 0, 00:12:04.061 "state": "enabled", 00:12:04.061 "thread": "nvmf_tgt_poll_group_000", 00:12:04.061 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7", 00:12:04.061 "listen_address": { 00:12:04.061 "trtype": "TCP", 00:12:04.061 "adrfam": "IPv4", 00:12:04.061 "traddr": "10.0.0.3", 00:12:04.061 "trsvcid": "4420" 00:12:04.061 }, 00:12:04.061 "peer_address": { 00:12:04.061 "trtype": "TCP", 00:12:04.061 "adrfam": "IPv4", 00:12:04.061 "traddr": "10.0.0.1", 00:12:04.061 "trsvcid": "36638" 00:12:04.061 }, 00:12:04.061 "auth": { 00:12:04.061 "state": "completed", 00:12:04.061 "digest": "sha256", 00:12:04.061 "dhgroup": "ffdhe8192" 00:12:04.061 } 00:12:04.061 } 00:12:04.061 ]' 00:12:04.061 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:04.061 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:04.061 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:04.061 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:04.061 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:04.061 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:04.061 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:04.061 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:04.320 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDQ3NGZhMzBjYzM5YjgwMmVmYzRjNTUxMTQ0ZjEyYzM3ODk0NzRkNDYxMzUzNTE18ttuoA==: --dhchap-ctrl-secret DHHC-1:03:YjJjZjQzMDlmMTk2MWM1MTU1MGZhNjE0NTA1YWJlZjI4NDVjM2U2YmNhNmI0YTFlMGFlNzg2OGM2YTQ3NGE1OWdh3Iw=: 00:12:04.320 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --hostid 8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -l 0 --dhchap-secret DHHC-1:00:NDQ3NGZhMzBjYzM5YjgwMmVmYzRjNTUxMTQ0ZjEyYzM3ODk0NzRkNDYxMzUzNTE18ttuoA==: --dhchap-ctrl-secret DHHC-1:03:YjJjZjQzMDlmMTk2MWM1MTU1MGZhNjE0NTA1YWJlZjI4NDVjM2U2YmNhNmI0YTFlMGFlNzg2OGM2YTQ3NGE1OWdh3Iw=: 00:12:04.888 09:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:04.888 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:04.889 09:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:12:04.889 09:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.889 09:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:04.889 09:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
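Each round in this log follows the same shape, visible in the records above: the host's allowed DH-HMAC-CHAP digest and DH group are set through the host RPC socket, the host NQN is added to the subsystem with the key pair under test, a controller is attached from the host side, and the resulting qpair is expected to report a completed authentication with the negotiated digest and DH group before the controller is detached again. A condensed sketch of one ffdhe8192/key0 round follows, reusing the sockets, addresses and NQNs printed above; key0/ckey0 name keys registered earlier in the run, and this is a paraphrase of the flow, not the literal target/auth.sh source.

    # Host side: restrict the initiator to sha256 + ffdhe8192 for this round.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
        bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192

    # Target side (rpc_cmd is the test wrapper for the target's default RPC socket):
    # allow the host NQN on the subsystem with key0/ckey0.
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # Attach from the host, then check what the target negotiated on the qpair.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
        bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
        | jq -r '.[0].auth.state'
    # expected "completed", with .auth.digest == sha256 and .auth.dhgroup == ffdhe8192,
    # matching the qpair JSON dumped in the log; afterwards the round detaches nvme0
    # with bdev_nvme_detach_controller and removes the host from the subsystem.
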
00:12:04.889 09:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:04.889 09:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:04.889 09:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:05.147 09:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:12:05.147 09:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:05.147 09:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:05.147 09:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:05.147 09:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:05.147 09:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:05.147 09:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:05.147 09:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.147 09:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.404 09:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.404 09:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:05.404 09:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:05.404 09:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:05.970 00:12:05.970 09:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:05.970 09:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:05.970 09:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:05.970 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:05.970 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:05.970 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.970 09:49:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.970 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.970 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:05.970 { 00:12:05.970 "cntlid": 43, 00:12:05.970 "qid": 0, 00:12:05.970 "state": "enabled", 00:12:05.970 "thread": "nvmf_tgt_poll_group_000", 00:12:05.970 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7", 00:12:05.970 "listen_address": { 00:12:05.970 "trtype": "TCP", 00:12:05.970 "adrfam": "IPv4", 00:12:05.970 "traddr": "10.0.0.3", 00:12:05.970 "trsvcid": "4420" 00:12:05.970 }, 00:12:05.970 "peer_address": { 00:12:05.970 "trtype": "TCP", 00:12:05.970 "adrfam": "IPv4", 00:12:05.970 "traddr": "10.0.0.1", 00:12:05.970 "trsvcid": "36672" 00:12:05.970 }, 00:12:05.970 "auth": { 00:12:05.970 "state": "completed", 00:12:05.970 "digest": "sha256", 00:12:05.970 "dhgroup": "ffdhe8192" 00:12:05.970 } 00:12:05.970 } 00:12:05.970 ]' 00:12:05.970 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:06.228 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:06.228 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:06.228 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:06.228 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:06.228 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:06.228 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:06.228 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:06.486 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmU0NDE0MjlmMDc4MGJlYTczZDg5YWY5MjRkZDVhMGToiSZf: --dhchap-ctrl-secret DHHC-1:02:YmI1YmE4MGIzNDU0NTQ1NTk2YzQyMDVmMDE0NDg0NWY1MjYzYTFmMWY2NmM3YzE3ilgsEw==: 00:12:06.486 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --hostid 8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -l 0 --dhchap-secret DHHC-1:01:ZmU0NDE0MjlmMDc4MGJlYTczZDg5YWY5MjRkZDVhMGToiSZf: --dhchap-ctrl-secret DHHC-1:02:YmI1YmE4MGIzNDU0NTQ1NTk2YzQyMDVmMDE0NDg0NWY1MjYzYTFmMWY2NmM3YzE3ilgsEw==: 00:12:07.052 09:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:07.052 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:07.052 09:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:12:07.053 09:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.053 09:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
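[editor's note] Each of these rounds drives the SPDK host application through the same two RPCs against /var/tmp/host.sock; the bdev_connect helper seen in the trace is essentially the following (condensed sketch; key1/ckey1 are keyring entries registered earlier in the test, outside this excerpt):

  # Limit the host to the digest/dhgroup combination under test (sha256 + ffdhe8192 here).
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
      bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
  # Attach a controller, authenticating the host with key1 and, mutually, the
  # controller with ckey1.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
      bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 \
      -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1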
00:12:07.053 09:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.053 09:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:07.053 09:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:07.053 09:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:07.311 09:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:12:07.311 09:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:07.311 09:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:07.311 09:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:07.311 09:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:07.311 09:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:07.311 09:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:07.311 09:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.311 09:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.311 09:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.311 09:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:07.311 09:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:07.311 09:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:07.900 00:12:07.900 09:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:07.900 09:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:07.900 09:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:08.467 09:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:08.467 09:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:08.467 09:49:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.467 09:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.467 09:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.467 09:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:08.467 { 00:12:08.467 "cntlid": 45, 00:12:08.467 "qid": 0, 00:12:08.467 "state": "enabled", 00:12:08.467 "thread": "nvmf_tgt_poll_group_000", 00:12:08.467 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7", 00:12:08.467 "listen_address": { 00:12:08.467 "trtype": "TCP", 00:12:08.467 "adrfam": "IPv4", 00:12:08.467 "traddr": "10.0.0.3", 00:12:08.467 "trsvcid": "4420" 00:12:08.467 }, 00:12:08.467 "peer_address": { 00:12:08.467 "trtype": "TCP", 00:12:08.467 "adrfam": "IPv4", 00:12:08.467 "traddr": "10.0.0.1", 00:12:08.467 "trsvcid": "36702" 00:12:08.467 }, 00:12:08.467 "auth": { 00:12:08.467 "state": "completed", 00:12:08.467 "digest": "sha256", 00:12:08.467 "dhgroup": "ffdhe8192" 00:12:08.467 } 00:12:08.467 } 00:12:08.467 ]' 00:12:08.467 09:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:08.467 09:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:08.467 09:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:08.467 09:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:08.467 09:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:08.467 09:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:08.467 09:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:08.467 09:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:08.725 09:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTc2NTlmZTNlOWQ4ZTA4NWZjZDAyOWViMjEwZjk5MGVmNTVjMjU5MDhjYTlhMzYyF3mKKA==: --dhchap-ctrl-secret DHHC-1:01:MTJjZTU3YjQ4NzFlZjRmMmYxZDM1MzZlMjMyZDIxNjTWc7wa: 00:12:08.725 09:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --hostid 8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -l 0 --dhchap-secret DHHC-1:02:NTc2NTlmZTNlOWQ4ZTA4NWZjZDAyOWViMjEwZjk5MGVmNTVjMjU5MDhjYTlhMzYyF3mKKA==: --dhchap-ctrl-secret DHHC-1:01:MTJjZTU3YjQ4NzFlZjRmMmYxZDM1MzZlMjMyZDIxNjTWc7wa: 00:12:09.660 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:09.660 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:09.660 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:12:09.660 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
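[editor's note] The kernel-initiator leg at target/auth.sh@36/@82 is the same handshake driven through nvme-cli instead of the SPDK host app. In outline (the placeholder variables stand in for the plain-text DHHC-1 secrets printed in the trace; nothing else is changed):

  # Connect the kernel host to the target, presenting the host secret and the
  # controller secret for bidirectional DH-HMAC-CHAP.
  nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 \
      --hostid 8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -l 0 \
      --dhchap-secret "$HOST_SECRET" --dhchap-ctrl-secret "$CTRL_SECRET"
  # Tear the session down once the connect has been verified.
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0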
00:12:09.660 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.660 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.660 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:09.660 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:09.660 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:09.660 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:12:09.660 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:09.660 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:09.660 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:09.660 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:09.660 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:09.660 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --dhchap-key key3 00:12:09.660 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.660 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.660 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.660 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:09.660 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:09.660 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:10.226 00:12:10.226 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:10.226 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:10.226 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:10.792 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:10.792 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:10.792 
09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.792 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.792 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.792 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:10.792 { 00:12:10.792 "cntlid": 47, 00:12:10.792 "qid": 0, 00:12:10.792 "state": "enabled", 00:12:10.792 "thread": "nvmf_tgt_poll_group_000", 00:12:10.792 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7", 00:12:10.792 "listen_address": { 00:12:10.792 "trtype": "TCP", 00:12:10.792 "adrfam": "IPv4", 00:12:10.792 "traddr": "10.0.0.3", 00:12:10.792 "trsvcid": "4420" 00:12:10.792 }, 00:12:10.792 "peer_address": { 00:12:10.792 "trtype": "TCP", 00:12:10.792 "adrfam": "IPv4", 00:12:10.792 "traddr": "10.0.0.1", 00:12:10.792 "trsvcid": "55594" 00:12:10.792 }, 00:12:10.792 "auth": { 00:12:10.792 "state": "completed", 00:12:10.792 "digest": "sha256", 00:12:10.792 "dhgroup": "ffdhe8192" 00:12:10.792 } 00:12:10.792 } 00:12:10.792 ]' 00:12:10.792 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:10.792 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:10.792 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:10.792 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:10.792 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:10.792 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:10.792 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:10.792 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:11.050 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2MxYzY4OWY1NTJjNzk1NTlmZmI1OWQ3MzQyNjI3YzU0MGViZTY1NmU0Yzk2MzY3NjZmZTE2MDQzNmY2MjgzMFH2Qo0=: 00:12:11.050 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --hostid 8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -l 0 --dhchap-secret DHHC-1:03:N2MxYzY4OWY1NTJjNzk1NTlmZmI1OWQ3MzQyNjI3YzU0MGViZTY1NmU0Yzk2MzY3NjZmZTE2MDQzNmY2MjgzMFH2Qo0=: 00:12:11.617 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:11.617 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:11.617 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:12:11.617 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.617 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:12:11.617 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.617 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:12:11.617 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:11.617 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:11.617 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:11.617 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:11.876 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:12:11.876 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:11.876 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:11.876 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:11.876 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:11.876 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:11.876 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:11.876 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.876 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.876 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.876 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:11.876 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:11.876 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:12.135 00:12:12.135 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:12.135 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:12.135 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:12.394 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:12.394 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:12.394 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.394 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:12.394 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.394 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:12.394 { 00:12:12.394 "cntlid": 49, 00:12:12.394 "qid": 0, 00:12:12.394 "state": "enabled", 00:12:12.394 "thread": "nvmf_tgt_poll_group_000", 00:12:12.394 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7", 00:12:12.394 "listen_address": { 00:12:12.394 "trtype": "TCP", 00:12:12.394 "adrfam": "IPv4", 00:12:12.394 "traddr": "10.0.0.3", 00:12:12.394 "trsvcid": "4420" 00:12:12.394 }, 00:12:12.394 "peer_address": { 00:12:12.394 "trtype": "TCP", 00:12:12.394 "adrfam": "IPv4", 00:12:12.394 "traddr": "10.0.0.1", 00:12:12.394 "trsvcid": "55624" 00:12:12.394 }, 00:12:12.394 "auth": { 00:12:12.394 "state": "completed", 00:12:12.394 "digest": "sha384", 00:12:12.394 "dhgroup": "null" 00:12:12.394 } 00:12:12.394 } 00:12:12.394 ]' 00:12:12.394 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:12.394 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:12.394 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:12.394 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:12.394 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:12.653 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:12.653 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:12.653 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:12.911 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDQ3NGZhMzBjYzM5YjgwMmVmYzRjNTUxMTQ0ZjEyYzM3ODk0NzRkNDYxMzUzNTE18ttuoA==: --dhchap-ctrl-secret DHHC-1:03:YjJjZjQzMDlmMTk2MWM1MTU1MGZhNjE0NTA1YWJlZjI4NDVjM2U2YmNhNmI0YTFlMGFlNzg2OGM2YTQ3NGE1OWdh3Iw=: 00:12:12.911 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --hostid 8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -l 0 --dhchap-secret DHHC-1:00:NDQ3NGZhMzBjYzM5YjgwMmVmYzRjNTUxMTQ0ZjEyYzM3ODk0NzRkNDYxMzUzNTE18ttuoA==: --dhchap-ctrl-secret DHHC-1:03:YjJjZjQzMDlmMTk2MWM1MTU1MGZhNjE0NTA1YWJlZjI4NDVjM2U2YmNhNmI0YTFlMGFlNzg2OGM2YTQ3NGE1OWdh3Iw=: 00:12:13.477 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:13.477 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:13.477 09:49:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:12:13.477 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.477 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.477 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.477 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:13.477 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:13.477 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:13.735 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:12:13.735 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:13.735 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:13.735 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:13.735 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:13.735 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:13.735 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:13.735 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.735 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.735 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.735 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:13.735 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:13.735 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:13.993 00:12:13.993 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:13.993 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
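[editor's note] On the target side, each connect_authenticate round first allows the host NQN on the subsystem together with the key pair it must present, and the teardown at target/auth.sh@83 removes it again. A sketch of that pair of RPCs (assuming the target app is on its default RPC socket; key1/ckey1 are the keyring names used in this round):

  # Allow the host on the subsystem and pin the DH-HMAC-CHAP keys it has to use.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host \
      nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # Drop the host entry again once the round is done.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_host \
      nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7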
00:12:13.993 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:14.251 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:14.251 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:14.251 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.251 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:14.251 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.251 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:14.251 { 00:12:14.251 "cntlid": 51, 00:12:14.251 "qid": 0, 00:12:14.251 "state": "enabled", 00:12:14.251 "thread": "nvmf_tgt_poll_group_000", 00:12:14.251 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7", 00:12:14.251 "listen_address": { 00:12:14.251 "trtype": "TCP", 00:12:14.251 "adrfam": "IPv4", 00:12:14.251 "traddr": "10.0.0.3", 00:12:14.251 "trsvcid": "4420" 00:12:14.251 }, 00:12:14.251 "peer_address": { 00:12:14.251 "trtype": "TCP", 00:12:14.251 "adrfam": "IPv4", 00:12:14.251 "traddr": "10.0.0.1", 00:12:14.251 "trsvcid": "55662" 00:12:14.251 }, 00:12:14.251 "auth": { 00:12:14.251 "state": "completed", 00:12:14.251 "digest": "sha384", 00:12:14.251 "dhgroup": "null" 00:12:14.251 } 00:12:14.251 } 00:12:14.251 ]' 00:12:14.251 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:14.251 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:14.251 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:14.510 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:14.510 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:14.510 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:14.510 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:14.510 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:14.770 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmU0NDE0MjlmMDc4MGJlYTczZDg5YWY5MjRkZDVhMGToiSZf: --dhchap-ctrl-secret DHHC-1:02:YmI1YmE4MGIzNDU0NTQ1NTk2YzQyMDVmMDE0NDg0NWY1MjYzYTFmMWY2NmM3YzE3ilgsEw==: 00:12:14.770 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --hostid 8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -l 0 --dhchap-secret DHHC-1:01:ZmU0NDE0MjlmMDc4MGJlYTczZDg5YWY5MjRkZDVhMGToiSZf: --dhchap-ctrl-secret DHHC-1:02:YmI1YmE4MGIzNDU0NTQ1NTk2YzQyMDVmMDE0NDg0NWY1MjYzYTFmMWY2NmM3YzE3ilgsEw==: 00:12:15.338 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:15.338 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:15.338 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:12:15.338 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.338 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.338 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.338 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:15.338 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:15.338 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:15.597 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:12:15.597 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:15.597 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:15.597 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:15.597 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:15.597 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:15.597 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:15.597 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.597 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.597 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.597 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:15.597 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:15.597 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:15.856 00:12:15.856 09:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:15.856 09:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:12:15.856 09:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:16.424 09:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:16.424 09:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:16.424 09:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.424 09:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.424 09:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.424 09:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:16.424 { 00:12:16.424 "cntlid": 53, 00:12:16.424 "qid": 0, 00:12:16.424 "state": "enabled", 00:12:16.424 "thread": "nvmf_tgt_poll_group_000", 00:12:16.424 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7", 00:12:16.424 "listen_address": { 00:12:16.424 "trtype": "TCP", 00:12:16.424 "adrfam": "IPv4", 00:12:16.424 "traddr": "10.0.0.3", 00:12:16.424 "trsvcid": "4420" 00:12:16.424 }, 00:12:16.424 "peer_address": { 00:12:16.424 "trtype": "TCP", 00:12:16.424 "adrfam": "IPv4", 00:12:16.424 "traddr": "10.0.0.1", 00:12:16.424 "trsvcid": "55682" 00:12:16.424 }, 00:12:16.424 "auth": { 00:12:16.424 "state": "completed", 00:12:16.424 "digest": "sha384", 00:12:16.424 "dhgroup": "null" 00:12:16.424 } 00:12:16.424 } 00:12:16.424 ]' 00:12:16.424 09:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:16.424 09:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:16.424 09:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:16.424 09:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:16.424 09:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:16.424 09:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:16.424 09:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:16.425 09:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:16.684 09:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTc2NTlmZTNlOWQ4ZTA4NWZjZDAyOWViMjEwZjk5MGVmNTVjMjU5MDhjYTlhMzYyF3mKKA==: --dhchap-ctrl-secret DHHC-1:01:MTJjZTU3YjQ4NzFlZjRmMmYxZDM1MzZlMjMyZDIxNjTWc7wa: 00:12:16.684 09:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --hostid 8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -l 0 --dhchap-secret DHHC-1:02:NTc2NTlmZTNlOWQ4ZTA4NWZjZDAyOWViMjEwZjk5MGVmNTVjMjU5MDhjYTlhMzYyF3mKKA==: --dhchap-ctrl-secret DHHC-1:01:MTJjZTU3YjQ4NzFlZjRmMmYxZDM1MzZlMjMyZDIxNjTWc7wa: 00:12:17.252 09:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:17.252 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:17.252 09:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:12:17.252 09:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.252 09:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.252 09:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.252 09:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:17.252 09:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:17.252 09:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:17.512 09:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:12:17.512 09:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:17.512 09:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:17.512 09:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:17.512 09:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:17.512 09:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:17.512 09:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --dhchap-key key3 00:12:17.512 09:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.512 09:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.512 09:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.512 09:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:17.512 09:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:17.512 09:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:17.772 00:12:17.772 09:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:17.772 09:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 
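[editor's note] Worth noting in the key3 rounds above and below: nvmf_subsystem_add_host and the controller attach carry only --dhchap-key key3, with no controller key, so this index exercises unidirectional authentication. That falls out of the expansion at target/auth.sh@68, roughly as follows (illustrative values; the real ckeys array is populated earlier in the script, with the entry for index 3 left empty):

  # connect_authenticate builds the optional controller-key arguments from the
  # ckeys array; an empty entry means no --dhchap-ctrlr-key is passed at all.
  ckeys=([0]="ckey0" [1]="ckey1" [2]="ckey2" [3]="")   # illustrative; index 3 empty
  keyid=3
  ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
  echo "${#ckey[@]}"   # prints 0 -> the key3 round authenticates the host only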
00:12:17.772 09:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:18.031 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:18.031 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:18.031 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.031 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.031 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.031 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:18.031 { 00:12:18.031 "cntlid": 55, 00:12:18.031 "qid": 0, 00:12:18.031 "state": "enabled", 00:12:18.031 "thread": "nvmf_tgt_poll_group_000", 00:12:18.031 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7", 00:12:18.031 "listen_address": { 00:12:18.031 "trtype": "TCP", 00:12:18.031 "adrfam": "IPv4", 00:12:18.032 "traddr": "10.0.0.3", 00:12:18.032 "trsvcid": "4420" 00:12:18.032 }, 00:12:18.032 "peer_address": { 00:12:18.032 "trtype": "TCP", 00:12:18.032 "adrfam": "IPv4", 00:12:18.032 "traddr": "10.0.0.1", 00:12:18.032 "trsvcid": "55714" 00:12:18.032 }, 00:12:18.032 "auth": { 00:12:18.032 "state": "completed", 00:12:18.032 "digest": "sha384", 00:12:18.032 "dhgroup": "null" 00:12:18.032 } 00:12:18.032 } 00:12:18.032 ]' 00:12:18.032 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:18.032 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:18.032 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:18.032 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:18.032 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:18.290 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:18.290 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:18.290 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:18.549 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2MxYzY4OWY1NTJjNzk1NTlmZmI1OWQ3MzQyNjI3YzU0MGViZTY1NmU0Yzk2MzY3NjZmZTE2MDQzNmY2MjgzMFH2Qo0=: 00:12:18.549 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --hostid 8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -l 0 --dhchap-secret DHHC-1:03:N2MxYzY4OWY1NTJjNzk1NTlmZmI1OWQ3MzQyNjI3YzU0MGViZTY1NmU0Yzk2MzY3NjZmZTE2MDQzNmY2MjgzMFH2Qo0=: 00:12:19.118 09:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:19.118 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
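[editor's note] The structure repeating through this whole stretch is the nested sweep at target/auth.sh@118-@121: every digest is crossed with every dhgroup and every key index, and the host's allowed algorithms are narrowed to exactly one combination before each connect. Schematically (array contents abbreviated and assumed; only the sha256/sha384 and ffdhe8192/null/ffdhe2048 slices of the sweep are visible in this excerpt):

  # Sweep every digest x dhgroup x key combination; each iteration reconfigures
  # the host and runs one authenticated connect/verify/disconnect round.
  digests=(sha256 sha384 sha512)          # assumed full list
  dhgroups=(null ffdhe2048 ffdhe8192)     # assumed; abbreviated
  for digest in "${digests[@]}"; do
      for dhgroup in "${dhgroups[@]}"; do
          for keyid in "${!keys[@]}"; do
              hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
              connect_authenticate "$digest" "$dhgroup" "$keyid"
          done
      done
  done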
00:12:19.118 09:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:12:19.118 09:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.118 09:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.118 09:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.118 09:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:19.118 09:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:19.118 09:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:19.118 09:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:19.377 09:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:12:19.377 09:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:19.378 09:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:19.378 09:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:19.378 09:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:19.378 09:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:19.378 09:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:19.378 09:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.378 09:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.378 09:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.378 09:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:19.378 09:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:19.378 09:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:19.946 00:12:19.946 09:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:19.946 09:49:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:19.946 09:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:20.204 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:20.204 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:20.204 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.204 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.204 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.204 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:20.204 { 00:12:20.204 "cntlid": 57, 00:12:20.204 "qid": 0, 00:12:20.204 "state": "enabled", 00:12:20.204 "thread": "nvmf_tgt_poll_group_000", 00:12:20.204 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7", 00:12:20.204 "listen_address": { 00:12:20.204 "trtype": "TCP", 00:12:20.204 "adrfam": "IPv4", 00:12:20.204 "traddr": "10.0.0.3", 00:12:20.204 "trsvcid": "4420" 00:12:20.204 }, 00:12:20.204 "peer_address": { 00:12:20.204 "trtype": "TCP", 00:12:20.204 "adrfam": "IPv4", 00:12:20.204 "traddr": "10.0.0.1", 00:12:20.204 "trsvcid": "49190" 00:12:20.204 }, 00:12:20.204 "auth": { 00:12:20.204 "state": "completed", 00:12:20.204 "digest": "sha384", 00:12:20.204 "dhgroup": "ffdhe2048" 00:12:20.204 } 00:12:20.204 } 00:12:20.204 ]' 00:12:20.204 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:20.204 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:20.204 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:20.204 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:20.204 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:20.204 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:20.204 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:20.204 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:20.462 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDQ3NGZhMzBjYzM5YjgwMmVmYzRjNTUxMTQ0ZjEyYzM3ODk0NzRkNDYxMzUzNTE18ttuoA==: --dhchap-ctrl-secret DHHC-1:03:YjJjZjQzMDlmMTk2MWM1MTU1MGZhNjE0NTA1YWJlZjI4NDVjM2U2YmNhNmI0YTFlMGFlNzg2OGM2YTQ3NGE1OWdh3Iw=: 00:12:20.462 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --hostid 8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -l 0 --dhchap-secret DHHC-1:00:NDQ3NGZhMzBjYzM5YjgwMmVmYzRjNTUxMTQ0ZjEyYzM3ODk0NzRkNDYxMzUzNTE18ttuoA==: 
--dhchap-ctrl-secret DHHC-1:03:YjJjZjQzMDlmMTk2MWM1MTU1MGZhNjE0NTA1YWJlZjI4NDVjM2U2YmNhNmI0YTFlMGFlNzg2OGM2YTQ3NGE1OWdh3Iw=: 00:12:21.398 09:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:21.398 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:21.398 09:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:12:21.398 09:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.398 09:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.398 09:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.398 09:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:21.398 09:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:21.398 09:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:21.399 09:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:12:21.399 09:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:21.399 09:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:21.399 09:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:21.399 09:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:21.399 09:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:21.399 09:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:21.399 09:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.399 09:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.399 09:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.399 09:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:21.399 09:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:21.399 09:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:21.985 00:12:21.985 09:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:21.985 09:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:21.985 09:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:22.244 09:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:22.244 09:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:22.244 09:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.244 09:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.244 09:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.244 09:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:22.244 { 00:12:22.244 "cntlid": 59, 00:12:22.244 "qid": 0, 00:12:22.244 "state": "enabled", 00:12:22.244 "thread": "nvmf_tgt_poll_group_000", 00:12:22.244 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7", 00:12:22.244 "listen_address": { 00:12:22.244 "trtype": "TCP", 00:12:22.244 "adrfam": "IPv4", 00:12:22.244 "traddr": "10.0.0.3", 00:12:22.244 "trsvcid": "4420" 00:12:22.244 }, 00:12:22.244 "peer_address": { 00:12:22.244 "trtype": "TCP", 00:12:22.244 "adrfam": "IPv4", 00:12:22.244 "traddr": "10.0.0.1", 00:12:22.244 "trsvcid": "49228" 00:12:22.244 }, 00:12:22.244 "auth": { 00:12:22.244 "state": "completed", 00:12:22.244 "digest": "sha384", 00:12:22.244 "dhgroup": "ffdhe2048" 00:12:22.244 } 00:12:22.244 } 00:12:22.244 ]' 00:12:22.244 09:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:22.244 09:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:22.244 09:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:22.244 09:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:22.244 09:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:22.244 09:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:22.244 09:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:22.244 09:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:22.503 09:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmU0NDE0MjlmMDc4MGJlYTczZDg5YWY5MjRkZDVhMGToiSZf: --dhchap-ctrl-secret DHHC-1:02:YmI1YmE4MGIzNDU0NTQ1NTk2YzQyMDVmMDE0NDg0NWY1MjYzYTFmMWY2NmM3YzE3ilgsEw==: 00:12:22.503 09:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --hostid 8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -l 0 --dhchap-secret DHHC-1:01:ZmU0NDE0MjlmMDc4MGJlYTczZDg5YWY5MjRkZDVhMGToiSZf: --dhchap-ctrl-secret DHHC-1:02:YmI1YmE4MGIzNDU0NTQ1NTk2YzQyMDVmMDE0NDg0NWY1MjYzYTFmMWY2NmM3YzE3ilgsEw==: 00:12:23.463 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:23.463 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:23.463 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:12:23.463 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.463 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.463 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.463 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:23.463 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:23.463 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:23.463 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:12:23.463 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:23.463 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:23.463 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:23.463 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:23.463 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:23.463 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:23.463 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.463 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.463 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.463 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:23.463 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:23.463 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:24.031 00:12:24.031 09:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:24.031 09:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:24.031 09:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:24.031 09:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:24.031 09:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:24.031 09:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.031 09:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.031 09:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.031 09:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:24.031 { 00:12:24.031 "cntlid": 61, 00:12:24.031 "qid": 0, 00:12:24.031 "state": "enabled", 00:12:24.031 "thread": "nvmf_tgt_poll_group_000", 00:12:24.031 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7", 00:12:24.031 "listen_address": { 00:12:24.031 "trtype": "TCP", 00:12:24.031 "adrfam": "IPv4", 00:12:24.031 "traddr": "10.0.0.3", 00:12:24.031 "trsvcid": "4420" 00:12:24.031 }, 00:12:24.031 "peer_address": { 00:12:24.031 "trtype": "TCP", 00:12:24.031 "adrfam": "IPv4", 00:12:24.031 "traddr": "10.0.0.1", 00:12:24.031 "trsvcid": "49248" 00:12:24.031 }, 00:12:24.031 "auth": { 00:12:24.031 "state": "completed", 00:12:24.031 "digest": "sha384", 00:12:24.031 "dhgroup": "ffdhe2048" 00:12:24.031 } 00:12:24.031 } 00:12:24.031 ]' 00:12:24.031 09:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:24.290 09:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:24.290 09:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:24.290 09:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:24.290 09:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:24.290 09:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:24.290 09:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:24.290 09:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:24.548 09:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTc2NTlmZTNlOWQ4ZTA4NWZjZDAyOWViMjEwZjk5MGVmNTVjMjU5MDhjYTlhMzYyF3mKKA==: --dhchap-ctrl-secret DHHC-1:01:MTJjZTU3YjQ4NzFlZjRmMmYxZDM1MzZlMjMyZDIxNjTWc7wa: 00:12:24.549 09:49:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --hostid 8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -l 0 --dhchap-secret DHHC-1:02:NTc2NTlmZTNlOWQ4ZTA4NWZjZDAyOWViMjEwZjk5MGVmNTVjMjU5MDhjYTlhMzYyF3mKKA==: --dhchap-ctrl-secret DHHC-1:01:MTJjZTU3YjQ4NzFlZjRmMmYxZDM1MzZlMjMyZDIxNjTWc7wa: 00:12:25.116 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:25.116 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:25.116 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:12:25.116 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.116 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.116 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.116 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:25.116 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:25.116 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:25.375 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:12:25.375 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:25.375 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:25.375 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:25.375 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:25.375 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:25.375 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --dhchap-key key3 00:12:25.375 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.375 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.375 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.375 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:25.375 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:25.375 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:25.634 00:12:25.893 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:25.893 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:25.893 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:25.893 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:25.893 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:25.893 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.893 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.893 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.152 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:26.152 { 00:12:26.153 "cntlid": 63, 00:12:26.153 "qid": 0, 00:12:26.153 "state": "enabled", 00:12:26.153 "thread": "nvmf_tgt_poll_group_000", 00:12:26.153 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7", 00:12:26.153 "listen_address": { 00:12:26.153 "trtype": "TCP", 00:12:26.153 "adrfam": "IPv4", 00:12:26.153 "traddr": "10.0.0.3", 00:12:26.153 "trsvcid": "4420" 00:12:26.153 }, 00:12:26.153 "peer_address": { 00:12:26.153 "trtype": "TCP", 00:12:26.153 "adrfam": "IPv4", 00:12:26.153 "traddr": "10.0.0.1", 00:12:26.153 "trsvcid": "49270" 00:12:26.153 }, 00:12:26.153 "auth": { 00:12:26.153 "state": "completed", 00:12:26.153 "digest": "sha384", 00:12:26.153 "dhgroup": "ffdhe2048" 00:12:26.153 } 00:12:26.153 } 00:12:26.153 ]' 00:12:26.153 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:26.153 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:26.153 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:26.153 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:26.153 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:26.153 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:26.153 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:26.153 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:26.411 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2MxYzY4OWY1NTJjNzk1NTlmZmI1OWQ3MzQyNjI3YzU0MGViZTY1NmU0Yzk2MzY3NjZmZTE2MDQzNmY2MjgzMFH2Qo0=: 00:12:26.411 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --hostid 8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -l 0 --dhchap-secret DHHC-1:03:N2MxYzY4OWY1NTJjNzk1NTlmZmI1OWQ3MzQyNjI3YzU0MGViZTY1NmU0Yzk2MzY3NjZmZTE2MDQzNmY2MjgzMFH2Qo0=: 00:12:26.978 09:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:26.978 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:26.978 09:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:12:26.978 09:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.978 09:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.978 09:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.978 09:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:26.978 09:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:26.978 09:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:26.978 09:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:27.236 09:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:12:27.236 09:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:27.236 09:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:27.236 09:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:27.236 09:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:27.236 09:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:27.236 09:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:27.236 09:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.236 09:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.236 09:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.236 09:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:27.236 09:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:12:27.236 09:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:27.802 00:12:27.802 09:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:27.802 09:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:27.802 09:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:28.061 09:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:28.061 09:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:28.061 09:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.061 09:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.061 09:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.061 09:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:28.061 { 00:12:28.061 "cntlid": 65, 00:12:28.061 "qid": 0, 00:12:28.061 "state": "enabled", 00:12:28.061 "thread": "nvmf_tgt_poll_group_000", 00:12:28.061 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7", 00:12:28.061 "listen_address": { 00:12:28.061 "trtype": "TCP", 00:12:28.061 "adrfam": "IPv4", 00:12:28.061 "traddr": "10.0.0.3", 00:12:28.061 "trsvcid": "4420" 00:12:28.061 }, 00:12:28.061 "peer_address": { 00:12:28.061 "trtype": "TCP", 00:12:28.061 "adrfam": "IPv4", 00:12:28.061 "traddr": "10.0.0.1", 00:12:28.061 "trsvcid": "49310" 00:12:28.061 }, 00:12:28.061 "auth": { 00:12:28.061 "state": "completed", 00:12:28.061 "digest": "sha384", 00:12:28.061 "dhgroup": "ffdhe3072" 00:12:28.061 } 00:12:28.061 } 00:12:28.061 ]' 00:12:28.061 09:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:28.061 09:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:28.061 09:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:28.061 09:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:28.061 09:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:28.061 09:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:28.061 09:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:28.061 09:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:28.320 09:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:NDQ3NGZhMzBjYzM5YjgwMmVmYzRjNTUxMTQ0ZjEyYzM3ODk0NzRkNDYxMzUzNTE18ttuoA==: --dhchap-ctrl-secret DHHC-1:03:YjJjZjQzMDlmMTk2MWM1MTU1MGZhNjE0NTA1YWJlZjI4NDVjM2U2YmNhNmI0YTFlMGFlNzg2OGM2YTQ3NGE1OWdh3Iw=: 00:12:28.320 09:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --hostid 8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -l 0 --dhchap-secret DHHC-1:00:NDQ3NGZhMzBjYzM5YjgwMmVmYzRjNTUxMTQ0ZjEyYzM3ODk0NzRkNDYxMzUzNTE18ttuoA==: --dhchap-ctrl-secret DHHC-1:03:YjJjZjQzMDlmMTk2MWM1MTU1MGZhNjE0NTA1YWJlZjI4NDVjM2U2YmNhNmI0YTFlMGFlNzg2OGM2YTQ3NGE1OWdh3Iw=: 00:12:29.254 09:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:29.254 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:29.254 09:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:12:29.255 09:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.255 09:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.255 09:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.255 09:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:29.255 09:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:29.255 09:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:29.255 09:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:12:29.255 09:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:29.255 09:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:29.255 09:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:29.255 09:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:29.255 09:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:29.255 09:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:29.255 09:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.255 09:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.255 09:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.255 09:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:29.255 09:49:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:29.255 09:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:29.513 00:12:29.513 09:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:29.513 09:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:29.513 09:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:29.771 09:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:29.771 09:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:29.771 09:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.771 09:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.771 09:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.771 09:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:29.771 { 00:12:29.771 "cntlid": 67, 00:12:29.771 "qid": 0, 00:12:29.771 "state": "enabled", 00:12:29.771 "thread": "nvmf_tgt_poll_group_000", 00:12:29.771 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7", 00:12:29.771 "listen_address": { 00:12:29.771 "trtype": "TCP", 00:12:29.771 "adrfam": "IPv4", 00:12:29.771 "traddr": "10.0.0.3", 00:12:29.771 "trsvcid": "4420" 00:12:29.771 }, 00:12:29.771 "peer_address": { 00:12:29.771 "trtype": "TCP", 00:12:29.771 "adrfam": "IPv4", 00:12:29.771 "traddr": "10.0.0.1", 00:12:29.771 "trsvcid": "59210" 00:12:29.771 }, 00:12:29.771 "auth": { 00:12:29.771 "state": "completed", 00:12:29.771 "digest": "sha384", 00:12:29.771 "dhgroup": "ffdhe3072" 00:12:29.771 } 00:12:29.771 } 00:12:29.771 ]' 00:12:29.771 09:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:30.028 09:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:30.028 09:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:30.028 09:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:30.028 09:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:30.028 09:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:30.028 09:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:30.028 09:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:30.285 09:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmU0NDE0MjlmMDc4MGJlYTczZDg5YWY5MjRkZDVhMGToiSZf: --dhchap-ctrl-secret DHHC-1:02:YmI1YmE4MGIzNDU0NTQ1NTk2YzQyMDVmMDE0NDg0NWY1MjYzYTFmMWY2NmM3YzE3ilgsEw==: 00:12:30.285 09:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --hostid 8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -l 0 --dhchap-secret DHHC-1:01:ZmU0NDE0MjlmMDc4MGJlYTczZDg5YWY5MjRkZDVhMGToiSZf: --dhchap-ctrl-secret DHHC-1:02:YmI1YmE4MGIzNDU0NTQ1NTk2YzQyMDVmMDE0NDg0NWY1MjYzYTFmMWY2NmM3YzE3ilgsEw==: 00:12:30.850 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:30.850 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:30.850 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:12:30.850 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.850 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.850 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.850 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:30.850 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:30.850 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:31.415 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:12:31.415 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:31.415 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:31.415 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:31.415 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:31.415 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:31.415 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:31.415 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.415 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.415 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.415 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:31.415 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:31.415 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:31.674 00:12:31.674 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:31.674 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:31.674 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:31.933 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:31.933 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:31.933 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.933 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.933 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.933 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:31.933 { 00:12:31.933 "cntlid": 69, 00:12:31.933 "qid": 0, 00:12:31.933 "state": "enabled", 00:12:31.933 "thread": "nvmf_tgt_poll_group_000", 00:12:31.933 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7", 00:12:31.933 "listen_address": { 00:12:31.933 "trtype": "TCP", 00:12:31.933 "adrfam": "IPv4", 00:12:31.933 "traddr": "10.0.0.3", 00:12:31.933 "trsvcid": "4420" 00:12:31.933 }, 00:12:31.933 "peer_address": { 00:12:31.933 "trtype": "TCP", 00:12:31.933 "adrfam": "IPv4", 00:12:31.933 "traddr": "10.0.0.1", 00:12:31.933 "trsvcid": "59234" 00:12:31.933 }, 00:12:31.933 "auth": { 00:12:31.933 "state": "completed", 00:12:31.933 "digest": "sha384", 00:12:31.933 "dhgroup": "ffdhe3072" 00:12:31.933 } 00:12:31.933 } 00:12:31.933 ]' 00:12:31.933 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:31.933 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:31.933 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:31.933 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:31.933 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:31.933 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:31.933 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:12:31.933 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:32.501 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTc2NTlmZTNlOWQ4ZTA4NWZjZDAyOWViMjEwZjk5MGVmNTVjMjU5MDhjYTlhMzYyF3mKKA==: --dhchap-ctrl-secret DHHC-1:01:MTJjZTU3YjQ4NzFlZjRmMmYxZDM1MzZlMjMyZDIxNjTWc7wa: 00:12:32.501 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --hostid 8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -l 0 --dhchap-secret DHHC-1:02:NTc2NTlmZTNlOWQ4ZTA4NWZjZDAyOWViMjEwZjk5MGVmNTVjMjU5MDhjYTlhMzYyF3mKKA==: --dhchap-ctrl-secret DHHC-1:01:MTJjZTU3YjQ4NzFlZjRmMmYxZDM1MzZlMjMyZDIxNjTWc7wa: 00:12:33.069 09:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:33.069 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:33.069 09:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:12:33.069 09:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.069 09:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.069 09:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.069 09:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:33.069 09:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:33.069 09:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:33.327 09:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:12:33.327 09:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:33.327 09:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:33.327 09:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:33.327 09:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:33.327 09:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:33.327 09:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --dhchap-key key3 00:12:33.328 09:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.328 09:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.328 09:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.328 09:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:33.328 09:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:33.328 09:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:33.586 00:12:33.586 09:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:33.586 09:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:33.586 09:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:33.845 09:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:33.845 09:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:33.845 09:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.845 09:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.845 09:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.845 09:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:33.845 { 00:12:33.845 "cntlid": 71, 00:12:33.845 "qid": 0, 00:12:33.845 "state": "enabled", 00:12:33.845 "thread": "nvmf_tgt_poll_group_000", 00:12:33.845 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7", 00:12:33.845 "listen_address": { 00:12:33.845 "trtype": "TCP", 00:12:33.845 "adrfam": "IPv4", 00:12:33.845 "traddr": "10.0.0.3", 00:12:33.846 "trsvcid": "4420" 00:12:33.846 }, 00:12:33.846 "peer_address": { 00:12:33.846 "trtype": "TCP", 00:12:33.846 "adrfam": "IPv4", 00:12:33.846 "traddr": "10.0.0.1", 00:12:33.846 "trsvcid": "59272" 00:12:33.846 }, 00:12:33.846 "auth": { 00:12:33.846 "state": "completed", 00:12:33.846 "digest": "sha384", 00:12:33.846 "dhgroup": "ffdhe3072" 00:12:33.846 } 00:12:33.846 } 00:12:33.846 ]' 00:12:33.846 09:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:33.846 09:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:33.846 09:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:34.104 09:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:34.104 09:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:34.104 09:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:34.104 09:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:34.104 09:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:34.363 09:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2MxYzY4OWY1NTJjNzk1NTlmZmI1OWQ3MzQyNjI3YzU0MGViZTY1NmU0Yzk2MzY3NjZmZTE2MDQzNmY2MjgzMFH2Qo0=: 00:12:34.363 09:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --hostid 8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -l 0 --dhchap-secret DHHC-1:03:N2MxYzY4OWY1NTJjNzk1NTlmZmI1OWQ3MzQyNjI3YzU0MGViZTY1NmU0Yzk2MzY3NjZmZTE2MDQzNmY2MjgzMFH2Qo0=: 00:12:34.931 09:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:34.931 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:34.931 09:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:12:34.931 09:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.931 09:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.931 09:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.931 09:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:34.931 09:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:34.931 09:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:34.931 09:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:35.189 09:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:12:35.189 09:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:35.189 09:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:35.189 09:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:35.189 09:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:35.189 09:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:35.189 09:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:35.189 09:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.189 09:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.189 09:50:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.189 09:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:35.189 09:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:35.189 09:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:35.757 00:12:35.757 09:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:35.757 09:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:35.757 09:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:36.036 09:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:36.036 09:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:36.036 09:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.036 09:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.036 09:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.036 09:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:36.036 { 00:12:36.036 "cntlid": 73, 00:12:36.036 "qid": 0, 00:12:36.036 "state": "enabled", 00:12:36.036 "thread": "nvmf_tgt_poll_group_000", 00:12:36.036 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7", 00:12:36.036 "listen_address": { 00:12:36.036 "trtype": "TCP", 00:12:36.036 "adrfam": "IPv4", 00:12:36.036 "traddr": "10.0.0.3", 00:12:36.036 "trsvcid": "4420" 00:12:36.036 }, 00:12:36.037 "peer_address": { 00:12:36.037 "trtype": "TCP", 00:12:36.037 "adrfam": "IPv4", 00:12:36.037 "traddr": "10.0.0.1", 00:12:36.037 "trsvcid": "59298" 00:12:36.037 }, 00:12:36.037 "auth": { 00:12:36.037 "state": "completed", 00:12:36.037 "digest": "sha384", 00:12:36.037 "dhgroup": "ffdhe4096" 00:12:36.037 } 00:12:36.037 } 00:12:36.037 ]' 00:12:36.037 09:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:36.037 09:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:36.037 09:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:36.037 09:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:36.037 09:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:36.037 09:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:36.037 09:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:36.037 09:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:36.298 09:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDQ3NGZhMzBjYzM5YjgwMmVmYzRjNTUxMTQ0ZjEyYzM3ODk0NzRkNDYxMzUzNTE18ttuoA==: --dhchap-ctrl-secret DHHC-1:03:YjJjZjQzMDlmMTk2MWM1MTU1MGZhNjE0NTA1YWJlZjI4NDVjM2U2YmNhNmI0YTFlMGFlNzg2OGM2YTQ3NGE1OWdh3Iw=: 00:12:36.298 09:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --hostid 8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -l 0 --dhchap-secret DHHC-1:00:NDQ3NGZhMzBjYzM5YjgwMmVmYzRjNTUxMTQ0ZjEyYzM3ODk0NzRkNDYxMzUzNTE18ttuoA==: --dhchap-ctrl-secret DHHC-1:03:YjJjZjQzMDlmMTk2MWM1MTU1MGZhNjE0NTA1YWJlZjI4NDVjM2U2YmNhNmI0YTFlMGFlNzg2OGM2YTQ3NGE1OWdh3Iw=: 00:12:36.903 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:36.903 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:36.903 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:12:36.903 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.903 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.903 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.903 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:36.903 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:36.903 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:37.189 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:12:37.189 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:37.189 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:37.189 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:37.189 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:37.189 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:37.189 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:37.189 09:50:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.189 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.189 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.189 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:37.189 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:37.189 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:37.757 00:12:37.757 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:37.757 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:37.757 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:38.016 09:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:38.016 09:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:38.016 09:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.016 09:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.016 09:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.016 09:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:38.016 { 00:12:38.016 "cntlid": 75, 00:12:38.016 "qid": 0, 00:12:38.016 "state": "enabled", 00:12:38.016 "thread": "nvmf_tgt_poll_group_000", 00:12:38.016 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7", 00:12:38.016 "listen_address": { 00:12:38.016 "trtype": "TCP", 00:12:38.016 "adrfam": "IPv4", 00:12:38.016 "traddr": "10.0.0.3", 00:12:38.016 "trsvcid": "4420" 00:12:38.016 }, 00:12:38.016 "peer_address": { 00:12:38.016 "trtype": "TCP", 00:12:38.016 "adrfam": "IPv4", 00:12:38.016 "traddr": "10.0.0.1", 00:12:38.016 "trsvcid": "59320" 00:12:38.016 }, 00:12:38.016 "auth": { 00:12:38.016 "state": "completed", 00:12:38.016 "digest": "sha384", 00:12:38.016 "dhgroup": "ffdhe4096" 00:12:38.016 } 00:12:38.016 } 00:12:38.016 ]' 00:12:38.016 09:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:38.016 09:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:38.016 09:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:38.016 09:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 
== \f\f\d\h\e\4\0\9\6 ]] 00:12:38.016 09:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:38.275 09:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:38.275 09:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:38.275 09:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:38.275 09:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmU0NDE0MjlmMDc4MGJlYTczZDg5YWY5MjRkZDVhMGToiSZf: --dhchap-ctrl-secret DHHC-1:02:YmI1YmE4MGIzNDU0NTQ1NTk2YzQyMDVmMDE0NDg0NWY1MjYzYTFmMWY2NmM3YzE3ilgsEw==: 00:12:38.275 09:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --hostid 8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -l 0 --dhchap-secret DHHC-1:01:ZmU0NDE0MjlmMDc4MGJlYTczZDg5YWY5MjRkZDVhMGToiSZf: --dhchap-ctrl-secret DHHC-1:02:YmI1YmE4MGIzNDU0NTQ1NTk2YzQyMDVmMDE0NDg0NWY1MjYzYTFmMWY2NmM3YzE3ilgsEw==: 00:12:39.212 09:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:39.212 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:39.212 09:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:12:39.212 09:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.212 09:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.212 09:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.212 09:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:39.212 09:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:39.212 09:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:39.471 09:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:12:39.471 09:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:39.471 09:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:39.471 09:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:39.471 09:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:39.471 09:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:39.471 09:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:39.471 09:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.471 09:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.471 09:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.471 09:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:39.471 09:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:39.471 09:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:39.730 00:12:39.730 09:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:39.730 09:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:39.730 09:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:39.989 09:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:39.989 09:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:39.989 09:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.989 09:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.989 09:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.989 09:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:39.989 { 00:12:39.989 "cntlid": 77, 00:12:39.989 "qid": 0, 00:12:39.989 "state": "enabled", 00:12:39.989 "thread": "nvmf_tgt_poll_group_000", 00:12:39.989 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7", 00:12:39.989 "listen_address": { 00:12:39.989 "trtype": "TCP", 00:12:39.989 "adrfam": "IPv4", 00:12:39.989 "traddr": "10.0.0.3", 00:12:39.989 "trsvcid": "4420" 00:12:39.989 }, 00:12:39.989 "peer_address": { 00:12:39.989 "trtype": "TCP", 00:12:39.989 "adrfam": "IPv4", 00:12:39.989 "traddr": "10.0.0.1", 00:12:39.989 "trsvcid": "60886" 00:12:39.989 }, 00:12:39.989 "auth": { 00:12:39.989 "state": "completed", 00:12:39.989 "digest": "sha384", 00:12:39.989 "dhgroup": "ffdhe4096" 00:12:39.989 } 00:12:39.989 } 00:12:39.989 ]' 00:12:39.989 09:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:40.247 09:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:40.247 09:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- 
# jq -r '.[0].auth.dhgroup' 00:12:40.247 09:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:40.247 09:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:40.247 09:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:40.247 09:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:40.247 09:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:40.505 09:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTc2NTlmZTNlOWQ4ZTA4NWZjZDAyOWViMjEwZjk5MGVmNTVjMjU5MDhjYTlhMzYyF3mKKA==: --dhchap-ctrl-secret DHHC-1:01:MTJjZTU3YjQ4NzFlZjRmMmYxZDM1MzZlMjMyZDIxNjTWc7wa: 00:12:40.505 09:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --hostid 8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -l 0 --dhchap-secret DHHC-1:02:NTc2NTlmZTNlOWQ4ZTA4NWZjZDAyOWViMjEwZjk5MGVmNTVjMjU5MDhjYTlhMzYyF3mKKA==: --dhchap-ctrl-secret DHHC-1:01:MTJjZTU3YjQ4NzFlZjRmMmYxZDM1MzZlMjMyZDIxNjTWc7wa: 00:12:41.070 09:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:41.071 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:41.071 09:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:12:41.071 09:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.071 09:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.071 09:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.071 09:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:41.071 09:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:41.071 09:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:41.637 09:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:12:41.637 09:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:41.637 09:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:41.637 09:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:41.637 09:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:41.637 09:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:41.638 09:50:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --dhchap-key key3 00:12:41.638 09:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.638 09:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.638 09:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.638 09:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:41.638 09:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:41.638 09:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:41.896 00:12:41.896 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:41.896 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:41.896 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:42.155 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:42.155 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:42.155 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.155 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.155 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.155 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:42.155 { 00:12:42.155 "cntlid": 79, 00:12:42.155 "qid": 0, 00:12:42.155 "state": "enabled", 00:12:42.155 "thread": "nvmf_tgt_poll_group_000", 00:12:42.155 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7", 00:12:42.155 "listen_address": { 00:12:42.155 "trtype": "TCP", 00:12:42.155 "adrfam": "IPv4", 00:12:42.155 "traddr": "10.0.0.3", 00:12:42.155 "trsvcid": "4420" 00:12:42.155 }, 00:12:42.155 "peer_address": { 00:12:42.155 "trtype": "TCP", 00:12:42.155 "adrfam": "IPv4", 00:12:42.155 "traddr": "10.0.0.1", 00:12:42.155 "trsvcid": "60902" 00:12:42.155 }, 00:12:42.155 "auth": { 00:12:42.155 "state": "completed", 00:12:42.155 "digest": "sha384", 00:12:42.155 "dhgroup": "ffdhe4096" 00:12:42.155 } 00:12:42.155 } 00:12:42.155 ]' 00:12:42.155 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:42.414 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:42.414 09:50:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:42.414 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:42.414 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:42.414 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:42.414 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:42.414 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:42.673 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2MxYzY4OWY1NTJjNzk1NTlmZmI1OWQ3MzQyNjI3YzU0MGViZTY1NmU0Yzk2MzY3NjZmZTE2MDQzNmY2MjgzMFH2Qo0=: 00:12:42.673 09:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --hostid 8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -l 0 --dhchap-secret DHHC-1:03:N2MxYzY4OWY1NTJjNzk1NTlmZmI1OWQ3MzQyNjI3YzU0MGViZTY1NmU0Yzk2MzY3NjZmZTE2MDQzNmY2MjgzMFH2Qo0=: 00:12:43.241 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:43.241 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:43.241 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:12:43.241 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.241 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.241 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.241 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:43.241 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:43.241 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:43.241 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:43.500 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:12:43.500 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:43.500 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:43.500 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:43.500 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:43.500 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:43.500 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:43.500 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.500 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.759 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.759 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:43.759 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:43.759 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:44.018 00:12:44.018 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:44.018 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:44.018 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:44.278 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:44.278 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:44.278 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.278 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.538 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.539 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:44.539 { 00:12:44.539 "cntlid": 81, 00:12:44.539 "qid": 0, 00:12:44.539 "state": "enabled", 00:12:44.539 "thread": "nvmf_tgt_poll_group_000", 00:12:44.539 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7", 00:12:44.539 "listen_address": { 00:12:44.539 "trtype": "TCP", 00:12:44.539 "adrfam": "IPv4", 00:12:44.539 "traddr": "10.0.0.3", 00:12:44.539 "trsvcid": "4420" 00:12:44.539 }, 00:12:44.539 "peer_address": { 00:12:44.539 "trtype": "TCP", 00:12:44.539 "adrfam": "IPv4", 00:12:44.539 "traddr": "10.0.0.1", 00:12:44.539 "trsvcid": "60924" 00:12:44.539 }, 00:12:44.539 "auth": { 00:12:44.539 "state": "completed", 00:12:44.539 "digest": "sha384", 00:12:44.539 "dhgroup": "ffdhe6144" 00:12:44.539 } 00:12:44.539 } 00:12:44.539 ]' 00:12:44.539 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
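The trace above repeats one DH-HMAC-CHAP round per digest/dhgroup/key combination: restrict the host-side options, authorize the host NQN on the subsystem with the key pair, attach a controller over TCP, confirm the negotiated auth parameters on the target, then tear everything down before the next combination. Below is a minimal bash sketch of that round, reusing the rpc.py path, socket, addresses and NQNs that appear in the trace; the key names and DHHC-1 secrets are placeholders for values registered earlier in the run.

#!/usr/bin/env bash
# Minimal sketch of one authentication round from target/auth.sh above.
# Paths, addresses and NQNs come from the trace; key names and DHHC-1 secrets
# are placeholders for the keyring entries/secrets created earlier in the run.
set -euo pipefail

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
hostsock=/var/tmp/host.sock                      # host-side SPDK app (bdev_nvme)
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7
hostid=8a753b29-bc84-4c8c-8ae2-d2e41bd915e7

digest=sha384         # hash under test
dhgroup=ffdhe4096     # FFDHE group under test (4096/6144/8192 in the trace)
key=key1 ckey=ckey1   # keyring names registered earlier in the run (placeholders)
secret='DHHC-1:01:placeholder'       # raw secrets used by nvme-cli (placeholders)
ctrl_secret='DHHC-1:02:placeholder'

# Limit the host-side NVMe driver to the digest/dhgroup being exercised.
"$rpc" -s "$hostsock" bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# Authorize the host on the target subsystem with (optionally bidirectional) keys.
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "$key" --dhchap-ctrlr-key "$ckey"

# Attach a host-side controller over TCP; DH-HMAC-CHAP runs during the connect.
"$rpc" -s "$hostsock" bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
  -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key "$key" --dhchap-ctrlr-key "$ckey"

# Confirm the controller exists and the target saw a completed authentication.
"$rpc" -s "$hostsock" bdev_nvme_get_controllers | jq -r '.[].name'       # -> nvme0
"$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'     # -> completed
"$rpc" -s "$hostsock" bdev_nvme_detach_controller nvme0

# Repeat the handshake through nvme-cli using the raw secrets, then clean up.
nvme connect -t tcp -a 10.0.0.3 -n "$subnqn" -i 1 -q "$hostnqn" --hostid "$hostid" -l 0 \
  --dhchap-secret "$secret" --dhchap-ctrl-secret "$ctrl_secret"
nvme disconnect -n "$subnqn"
"$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"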
00:12:44.539 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:44.539 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:44.539 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:44.539 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:44.539 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:44.539 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:44.539 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:44.798 09:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDQ3NGZhMzBjYzM5YjgwMmVmYzRjNTUxMTQ0ZjEyYzM3ODk0NzRkNDYxMzUzNTE18ttuoA==: --dhchap-ctrl-secret DHHC-1:03:YjJjZjQzMDlmMTk2MWM1MTU1MGZhNjE0NTA1YWJlZjI4NDVjM2U2YmNhNmI0YTFlMGFlNzg2OGM2YTQ3NGE1OWdh3Iw=: 00:12:44.798 09:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --hostid 8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -l 0 --dhchap-secret DHHC-1:00:NDQ3NGZhMzBjYzM5YjgwMmVmYzRjNTUxMTQ0ZjEyYzM3ODk0NzRkNDYxMzUzNTE18ttuoA==: --dhchap-ctrl-secret DHHC-1:03:YjJjZjQzMDlmMTk2MWM1MTU1MGZhNjE0NTA1YWJlZjI4NDVjM2U2YmNhNmI0YTFlMGFlNzg2OGM2YTQ3NGE1OWdh3Iw=: 00:12:45.735 09:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:45.735 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:45.735 09:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:12:45.735 09:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.735 09:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.735 09:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.735 09:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:45.735 09:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:45.735 09:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:45.735 09:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:12:45.735 09:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:45.735 09:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:45.735 09:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe6144 00:12:45.735 09:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:45.735 09:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:45.735 09:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:45.735 09:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.735 09:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.735 09:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.735 09:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:45.735 09:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:45.735 09:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:46.302 00:12:46.302 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:46.302 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:46.302 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:46.560 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:46.560 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:46.560 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.560 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.560 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.561 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:46.561 { 00:12:46.561 "cntlid": 83, 00:12:46.561 "qid": 0, 00:12:46.561 "state": "enabled", 00:12:46.561 "thread": "nvmf_tgt_poll_group_000", 00:12:46.561 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7", 00:12:46.561 "listen_address": { 00:12:46.561 "trtype": "TCP", 00:12:46.561 "adrfam": "IPv4", 00:12:46.561 "traddr": "10.0.0.3", 00:12:46.561 "trsvcid": "4420" 00:12:46.561 }, 00:12:46.561 "peer_address": { 00:12:46.561 "trtype": "TCP", 00:12:46.561 "adrfam": "IPv4", 00:12:46.561 "traddr": "10.0.0.1", 00:12:46.561 "trsvcid": "60950" 00:12:46.561 }, 00:12:46.561 "auth": { 00:12:46.561 "state": "completed", 00:12:46.561 "digest": "sha384", 
00:12:46.561 "dhgroup": "ffdhe6144" 00:12:46.561 } 00:12:46.561 } 00:12:46.561 ]' 00:12:46.561 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:46.561 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:46.561 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:46.561 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:46.561 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:46.561 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:46.561 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:46.561 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:46.819 09:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmU0NDE0MjlmMDc4MGJlYTczZDg5YWY5MjRkZDVhMGToiSZf: --dhchap-ctrl-secret DHHC-1:02:YmI1YmE4MGIzNDU0NTQ1NTk2YzQyMDVmMDE0NDg0NWY1MjYzYTFmMWY2NmM3YzE3ilgsEw==: 00:12:46.819 09:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --hostid 8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -l 0 --dhchap-secret DHHC-1:01:ZmU0NDE0MjlmMDc4MGJlYTczZDg5YWY5MjRkZDVhMGToiSZf: --dhchap-ctrl-secret DHHC-1:02:YmI1YmE4MGIzNDU0NTQ1NTk2YzQyMDVmMDE0NDg0NWY1MjYzYTFmMWY2NmM3YzE3ilgsEw==: 00:12:47.775 09:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:47.775 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:47.775 09:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:12:47.775 09:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.775 09:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.775 09:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.775 09:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:47.775 09:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:47.775 09:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:47.775 09:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:12:47.775 09:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:47.775 09:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
digest=sha384 00:12:47.775 09:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:47.775 09:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:47.775 09:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:47.775 09:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:47.775 09:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.775 09:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.775 09:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.775 09:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:47.775 09:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:47.775 09:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:48.343 00:12:48.343 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:48.343 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:48.343 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:48.343 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:48.343 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:48.343 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.343 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.343 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.343 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:48.343 { 00:12:48.343 "cntlid": 85, 00:12:48.343 "qid": 0, 00:12:48.343 "state": "enabled", 00:12:48.343 "thread": "nvmf_tgt_poll_group_000", 00:12:48.343 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7", 00:12:48.343 "listen_address": { 00:12:48.343 "trtype": "TCP", 00:12:48.343 "adrfam": "IPv4", 00:12:48.343 "traddr": "10.0.0.3", 00:12:48.343 "trsvcid": "4420" 00:12:48.343 }, 00:12:48.343 "peer_address": { 00:12:48.343 "trtype": "TCP", 00:12:48.343 "adrfam": "IPv4", 00:12:48.343 "traddr": "10.0.0.1", 00:12:48.343 "trsvcid": "60976" 
00:12:48.343 }, 00:12:48.343 "auth": { 00:12:48.343 "state": "completed", 00:12:48.343 "digest": "sha384", 00:12:48.343 "dhgroup": "ffdhe6144" 00:12:48.343 } 00:12:48.343 } 00:12:48.343 ]' 00:12:48.343 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:48.603 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:48.603 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:48.603 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:48.603 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:48.603 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:48.603 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:48.603 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:48.862 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTc2NTlmZTNlOWQ4ZTA4NWZjZDAyOWViMjEwZjk5MGVmNTVjMjU5MDhjYTlhMzYyF3mKKA==: --dhchap-ctrl-secret DHHC-1:01:MTJjZTU3YjQ4NzFlZjRmMmYxZDM1MzZlMjMyZDIxNjTWc7wa: 00:12:48.862 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --hostid 8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -l 0 --dhchap-secret DHHC-1:02:NTc2NTlmZTNlOWQ4ZTA4NWZjZDAyOWViMjEwZjk5MGVmNTVjMjU5MDhjYTlhMzYyF3mKKA==: --dhchap-ctrl-secret DHHC-1:01:MTJjZTU3YjQ4NzFlZjRmMmYxZDM1MzZlMjMyZDIxNjTWc7wa: 00:12:49.438 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:49.438 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:49.703 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:12:49.703 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.703 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.703 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.703 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:49.703 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:49.703 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:49.703 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:12:49.703 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key 
ckey qpairs 00:12:49.703 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:49.703 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:49.703 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:49.703 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:49.703 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --dhchap-key key3 00:12:49.703 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.703 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.703 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.703 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:49.703 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:49.703 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:50.271 00:12:50.271 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:50.271 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:50.271 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:50.529 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:50.529 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:50.529 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.529 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.529 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.529 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:50.529 { 00:12:50.529 "cntlid": 87, 00:12:50.529 "qid": 0, 00:12:50.529 "state": "enabled", 00:12:50.529 "thread": "nvmf_tgt_poll_group_000", 00:12:50.529 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7", 00:12:50.529 "listen_address": { 00:12:50.529 "trtype": "TCP", 00:12:50.529 "adrfam": "IPv4", 00:12:50.529 "traddr": "10.0.0.3", 00:12:50.529 "trsvcid": "4420" 00:12:50.529 }, 00:12:50.529 "peer_address": { 00:12:50.529 "trtype": "TCP", 00:12:50.529 "adrfam": "IPv4", 00:12:50.529 "traddr": "10.0.0.1", 00:12:50.529 "trsvcid": 
"35970" 00:12:50.529 }, 00:12:50.529 "auth": { 00:12:50.529 "state": "completed", 00:12:50.529 "digest": "sha384", 00:12:50.529 "dhgroup": "ffdhe6144" 00:12:50.529 } 00:12:50.529 } 00:12:50.529 ]' 00:12:50.529 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:50.529 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:50.529 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:50.787 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:50.787 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:50.787 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:50.787 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:50.787 09:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:51.045 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2MxYzY4OWY1NTJjNzk1NTlmZmI1OWQ3MzQyNjI3YzU0MGViZTY1NmU0Yzk2MzY3NjZmZTE2MDQzNmY2MjgzMFH2Qo0=: 00:12:51.045 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --hostid 8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -l 0 --dhchap-secret DHHC-1:03:N2MxYzY4OWY1NTJjNzk1NTlmZmI1OWQ3MzQyNjI3YzU0MGViZTY1NmU0Yzk2MzY3NjZmZTE2MDQzNmY2MjgzMFH2Qo0=: 00:12:51.612 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:51.612 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:51.612 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:12:51.612 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.612 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.612 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.612 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:51.612 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:51.612 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:51.612 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:51.871 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:12:51.871 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest 
dhgroup key ckey qpairs 00:12:51.871 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:51.871 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:51.871 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:51.871 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:51.871 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:51.871 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.871 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.871 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.871 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:51.871 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:51.871 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:52.439 00:12:52.439 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:52.439 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:52.439 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:53.008 09:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:53.008 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:53.008 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.008 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.008 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.008 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:53.008 { 00:12:53.008 "cntlid": 89, 00:12:53.008 "qid": 0, 00:12:53.008 "state": "enabled", 00:12:53.008 "thread": "nvmf_tgt_poll_group_000", 00:12:53.008 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7", 00:12:53.008 "listen_address": { 00:12:53.008 "trtype": "TCP", 00:12:53.008 "adrfam": "IPv4", 00:12:53.008 "traddr": "10.0.0.3", 00:12:53.008 "trsvcid": "4420" 00:12:53.008 }, 00:12:53.008 "peer_address": { 00:12:53.008 
"trtype": "TCP", 00:12:53.008 "adrfam": "IPv4", 00:12:53.008 "traddr": "10.0.0.1", 00:12:53.008 "trsvcid": "35992" 00:12:53.008 }, 00:12:53.008 "auth": { 00:12:53.008 "state": "completed", 00:12:53.008 "digest": "sha384", 00:12:53.008 "dhgroup": "ffdhe8192" 00:12:53.008 } 00:12:53.008 } 00:12:53.008 ]' 00:12:53.008 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:53.008 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:53.008 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:53.008 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:53.008 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:53.008 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:53.008 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:53.008 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:53.267 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDQ3NGZhMzBjYzM5YjgwMmVmYzRjNTUxMTQ0ZjEyYzM3ODk0NzRkNDYxMzUzNTE18ttuoA==: --dhchap-ctrl-secret DHHC-1:03:YjJjZjQzMDlmMTk2MWM1MTU1MGZhNjE0NTA1YWJlZjI4NDVjM2U2YmNhNmI0YTFlMGFlNzg2OGM2YTQ3NGE1OWdh3Iw=: 00:12:53.267 09:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --hostid 8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -l 0 --dhchap-secret DHHC-1:00:NDQ3NGZhMzBjYzM5YjgwMmVmYzRjNTUxMTQ0ZjEyYzM3ODk0NzRkNDYxMzUzNTE18ttuoA==: --dhchap-ctrl-secret DHHC-1:03:YjJjZjQzMDlmMTk2MWM1MTU1MGZhNjE0NTA1YWJlZjI4NDVjM2U2YmNhNmI0YTFlMGFlNzg2OGM2YTQ3NGE1OWdh3Iw=: 00:12:53.836 09:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:53.836 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:54.096 09:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:12:54.096 09:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.096 09:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.096 09:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.096 09:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:54.096 09:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:54.096 09:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:54.096 09:50:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:12:54.096 09:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:54.096 09:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:54.096 09:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:54.096 09:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:54.096 09:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:54.096 09:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:54.096 09:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.096 09:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.355 09:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.355 09:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:54.355 09:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:54.355 09:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:54.922 00:12:54.922 09:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:54.922 09:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:54.922 09:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:55.199 09:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:55.199 09:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:55.199 09:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.199 09:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.199 09:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.199 09:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:55.199 { 00:12:55.199 "cntlid": 91, 00:12:55.199 "qid": 0, 00:12:55.199 "state": "enabled", 00:12:55.199 "thread": "nvmf_tgt_poll_group_000", 00:12:55.199 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7", 
00:12:55.199 "listen_address": { 00:12:55.199 "trtype": "TCP", 00:12:55.199 "adrfam": "IPv4", 00:12:55.199 "traddr": "10.0.0.3", 00:12:55.199 "trsvcid": "4420" 00:12:55.199 }, 00:12:55.199 "peer_address": { 00:12:55.199 "trtype": "TCP", 00:12:55.199 "adrfam": "IPv4", 00:12:55.199 "traddr": "10.0.0.1", 00:12:55.199 "trsvcid": "36012" 00:12:55.199 }, 00:12:55.199 "auth": { 00:12:55.199 "state": "completed", 00:12:55.199 "digest": "sha384", 00:12:55.199 "dhgroup": "ffdhe8192" 00:12:55.199 } 00:12:55.199 } 00:12:55.199 ]' 00:12:55.199 09:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:55.199 09:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:55.199 09:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:55.199 09:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:55.199 09:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:55.199 09:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:55.200 09:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:55.200 09:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:55.459 09:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmU0NDE0MjlmMDc4MGJlYTczZDg5YWY5MjRkZDVhMGToiSZf: --dhchap-ctrl-secret DHHC-1:02:YmI1YmE4MGIzNDU0NTQ1NTk2YzQyMDVmMDE0NDg0NWY1MjYzYTFmMWY2NmM3YzE3ilgsEw==: 00:12:55.459 09:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --hostid 8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -l 0 --dhchap-secret DHHC-1:01:ZmU0NDE0MjlmMDc4MGJlYTczZDg5YWY5MjRkZDVhMGToiSZf: --dhchap-ctrl-secret DHHC-1:02:YmI1YmE4MGIzNDU0NTQ1NTk2YzQyMDVmMDE0NDg0NWY1MjYzYTFmMWY2NmM3YzE3ilgsEw==: 00:12:56.399 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:56.399 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:56.399 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:12:56.399 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.399 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:56.399 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.399 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:56.399 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:56.399 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:56.658 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:12:56.658 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:56.658 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:56.658 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:56.658 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:56.658 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:56.658 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:56.658 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.658 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:56.658 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.658 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:56.658 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:56.658 09:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:57.227 00:12:57.227 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:57.227 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:57.227 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:57.486 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:57.486 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:57.486 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.486 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.486 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.486 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:57.486 { 00:12:57.486 "cntlid": 93, 00:12:57.486 "qid": 0, 00:12:57.486 "state": "enabled", 00:12:57.486 "thread": 
"nvmf_tgt_poll_group_000", 00:12:57.486 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7", 00:12:57.486 "listen_address": { 00:12:57.486 "trtype": "TCP", 00:12:57.486 "adrfam": "IPv4", 00:12:57.486 "traddr": "10.0.0.3", 00:12:57.486 "trsvcid": "4420" 00:12:57.486 }, 00:12:57.486 "peer_address": { 00:12:57.486 "trtype": "TCP", 00:12:57.486 "adrfam": "IPv4", 00:12:57.486 "traddr": "10.0.0.1", 00:12:57.486 "trsvcid": "36032" 00:12:57.486 }, 00:12:57.486 "auth": { 00:12:57.486 "state": "completed", 00:12:57.486 "digest": "sha384", 00:12:57.486 "dhgroup": "ffdhe8192" 00:12:57.486 } 00:12:57.486 } 00:12:57.486 ]' 00:12:57.486 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:57.486 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:57.486 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:57.486 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:57.486 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:57.486 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:57.486 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:57.486 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:57.746 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTc2NTlmZTNlOWQ4ZTA4NWZjZDAyOWViMjEwZjk5MGVmNTVjMjU5MDhjYTlhMzYyF3mKKA==: --dhchap-ctrl-secret DHHC-1:01:MTJjZTU3YjQ4NzFlZjRmMmYxZDM1MzZlMjMyZDIxNjTWc7wa: 00:12:57.746 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --hostid 8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -l 0 --dhchap-secret DHHC-1:02:NTc2NTlmZTNlOWQ4ZTA4NWZjZDAyOWViMjEwZjk5MGVmNTVjMjU5MDhjYTlhMzYyF3mKKA==: --dhchap-ctrl-secret DHHC-1:01:MTJjZTU3YjQ4NzFlZjRmMmYxZDM1MzZlMjMyZDIxNjTWc7wa: 00:12:58.314 09:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:58.314 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:58.314 09:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:12:58.314 09:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.314 09:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.314 09:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.314 09:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:58.314 09:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:58.314 09:50:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:58.883 09:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:12:58.883 09:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:58.883 09:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:58.883 09:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:58.883 09:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:58.883 09:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:58.883 09:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --dhchap-key key3 00:12:58.883 09:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.883 09:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.883 09:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.883 09:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:58.883 09:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:58.883 09:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:59.451 00:12:59.451 09:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:59.451 09:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:59.451 09:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:59.451 09:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:59.451 09:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:59.451 09:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.451 09:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.710 09:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.710 09:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:59.710 { 00:12:59.710 "cntlid": 95, 00:12:59.710 "qid": 0, 00:12:59.710 "state": "enabled", 00:12:59.710 
"thread": "nvmf_tgt_poll_group_000", 00:12:59.710 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7", 00:12:59.710 "listen_address": { 00:12:59.710 "trtype": "TCP", 00:12:59.710 "adrfam": "IPv4", 00:12:59.710 "traddr": "10.0.0.3", 00:12:59.710 "trsvcid": "4420" 00:12:59.710 }, 00:12:59.710 "peer_address": { 00:12:59.710 "trtype": "TCP", 00:12:59.710 "adrfam": "IPv4", 00:12:59.710 "traddr": "10.0.0.1", 00:12:59.710 "trsvcid": "60118" 00:12:59.710 }, 00:12:59.710 "auth": { 00:12:59.710 "state": "completed", 00:12:59.710 "digest": "sha384", 00:12:59.710 "dhgroup": "ffdhe8192" 00:12:59.710 } 00:12:59.710 } 00:12:59.710 ]' 00:12:59.710 09:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:59.710 09:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:59.710 09:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:59.710 09:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:59.710 09:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:59.710 09:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:59.710 09:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:59.710 09:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:59.968 09:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2MxYzY4OWY1NTJjNzk1NTlmZmI1OWQ3MzQyNjI3YzU0MGViZTY1NmU0Yzk2MzY3NjZmZTE2MDQzNmY2MjgzMFH2Qo0=: 00:12:59.968 09:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --hostid 8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -l 0 --dhchap-secret DHHC-1:03:N2MxYzY4OWY1NTJjNzk1NTlmZmI1OWQ3MzQyNjI3YzU0MGViZTY1NmU0Yzk2MzY3NjZmZTE2MDQzNmY2MjgzMFH2Qo0=: 00:13:00.904 09:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:00.905 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:00.905 09:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:13:00.905 09:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.905 09:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.905 09:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.905 09:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:13:00.905 09:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:00.905 09:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:00.905 09:50:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:00.905 09:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:00.905 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:13:00.905 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:00.905 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:00.905 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:00.905 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:00.905 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:00.905 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:00.905 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.905 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.905 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.905 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:00.905 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:00.905 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:01.473 00:13:01.473 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:01.473 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:01.473 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:01.473 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:01.473 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:01.473 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.473 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.473 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.473 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:01.473 { 00:13:01.473 "cntlid": 97, 00:13:01.473 "qid": 0, 00:13:01.473 "state": "enabled", 00:13:01.473 "thread": "nvmf_tgt_poll_group_000", 00:13:01.473 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7", 00:13:01.473 "listen_address": { 00:13:01.473 "trtype": "TCP", 00:13:01.473 "adrfam": "IPv4", 00:13:01.473 "traddr": "10.0.0.3", 00:13:01.473 "trsvcid": "4420" 00:13:01.473 }, 00:13:01.473 "peer_address": { 00:13:01.473 "trtype": "TCP", 00:13:01.473 "adrfam": "IPv4", 00:13:01.473 "traddr": "10.0.0.1", 00:13:01.473 "trsvcid": "60136" 00:13:01.473 }, 00:13:01.473 "auth": { 00:13:01.473 "state": "completed", 00:13:01.473 "digest": "sha512", 00:13:01.473 "dhgroup": "null" 00:13:01.473 } 00:13:01.473 } 00:13:01.473 ]' 00:13:01.733 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:01.733 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:01.733 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:01.733 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:01.733 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:01.733 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:01.733 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:01.733 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:01.992 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDQ3NGZhMzBjYzM5YjgwMmVmYzRjNTUxMTQ0ZjEyYzM3ODk0NzRkNDYxMzUzNTE18ttuoA==: --dhchap-ctrl-secret DHHC-1:03:YjJjZjQzMDlmMTk2MWM1MTU1MGZhNjE0NTA1YWJlZjI4NDVjM2U2YmNhNmI0YTFlMGFlNzg2OGM2YTQ3NGE1OWdh3Iw=: 00:13:01.992 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --hostid 8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -l 0 --dhchap-secret DHHC-1:00:NDQ3NGZhMzBjYzM5YjgwMmVmYzRjNTUxMTQ0ZjEyYzM3ODk0NzRkNDYxMzUzNTE18ttuoA==: --dhchap-ctrl-secret DHHC-1:03:YjJjZjQzMDlmMTk2MWM1MTU1MGZhNjE0NTA1YWJlZjI4NDVjM2U2YmNhNmI0YTFlMGFlNzg2OGM2YTQ3NGE1OWdh3Iw=: 00:13:02.560 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:02.560 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:02.560 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:13:02.560 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.560 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:02.560 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:13:02.560 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:02.560 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:02.560 09:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:02.819 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:13:02.819 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:02.819 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:02.819 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:02.819 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:02.819 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:02.819 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:02.819 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.819 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:02.819 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.819 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:02.819 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:02.819 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:03.387 00:13:03.387 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:03.387 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:03.387 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:03.646 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:03.646 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:03.646 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.646 09:50:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:03.646 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.646 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:03.646 { 00:13:03.646 "cntlid": 99, 00:13:03.646 "qid": 0, 00:13:03.646 "state": "enabled", 00:13:03.646 "thread": "nvmf_tgt_poll_group_000", 00:13:03.646 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7", 00:13:03.646 "listen_address": { 00:13:03.646 "trtype": "TCP", 00:13:03.646 "adrfam": "IPv4", 00:13:03.646 "traddr": "10.0.0.3", 00:13:03.646 "trsvcid": "4420" 00:13:03.646 }, 00:13:03.646 "peer_address": { 00:13:03.646 "trtype": "TCP", 00:13:03.646 "adrfam": "IPv4", 00:13:03.646 "traddr": "10.0.0.1", 00:13:03.646 "trsvcid": "60160" 00:13:03.646 }, 00:13:03.646 "auth": { 00:13:03.647 "state": "completed", 00:13:03.647 "digest": "sha512", 00:13:03.647 "dhgroup": "null" 00:13:03.647 } 00:13:03.647 } 00:13:03.647 ]' 00:13:03.647 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:03.647 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:03.647 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:03.647 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:03.647 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:03.647 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:03.647 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:03.647 09:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:03.906 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmU0NDE0MjlmMDc4MGJlYTczZDg5YWY5MjRkZDVhMGToiSZf: --dhchap-ctrl-secret DHHC-1:02:YmI1YmE4MGIzNDU0NTQ1NTk2YzQyMDVmMDE0NDg0NWY1MjYzYTFmMWY2NmM3YzE3ilgsEw==: 00:13:03.906 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --hostid 8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -l 0 --dhchap-secret DHHC-1:01:ZmU0NDE0MjlmMDc4MGJlYTczZDg5YWY5MjRkZDVhMGToiSZf: --dhchap-ctrl-secret DHHC-1:02:YmI1YmE4MGIzNDU0NTQ1NTk2YzQyMDVmMDE0NDg0NWY1MjYzYTFmMWY2NmM3YzE3ilgsEw==: 00:13:04.844 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:04.844 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:04.844 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:13:04.844 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.844 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:04.844 09:50:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.844 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:04.844 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:04.844 09:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:04.844 09:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:13:04.844 09:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:04.844 09:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:04.844 09:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:04.844 09:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:04.844 09:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:04.844 09:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:04.844 09:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.844 09:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:04.844 09:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.844 09:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:04.844 09:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:04.844 09:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:05.409 00:13:05.409 09:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:05.409 09:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:05.409 09:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:05.668 09:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:05.668 09:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:05.668 09:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.668 09:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:05.668 09:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.668 09:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:05.668 { 00:13:05.668 "cntlid": 101, 00:13:05.668 "qid": 0, 00:13:05.668 "state": "enabled", 00:13:05.668 "thread": "nvmf_tgt_poll_group_000", 00:13:05.668 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7", 00:13:05.668 "listen_address": { 00:13:05.668 "trtype": "TCP", 00:13:05.668 "adrfam": "IPv4", 00:13:05.668 "traddr": "10.0.0.3", 00:13:05.668 "trsvcid": "4420" 00:13:05.668 }, 00:13:05.668 "peer_address": { 00:13:05.668 "trtype": "TCP", 00:13:05.668 "adrfam": "IPv4", 00:13:05.668 "traddr": "10.0.0.1", 00:13:05.668 "trsvcid": "60190" 00:13:05.668 }, 00:13:05.668 "auth": { 00:13:05.668 "state": "completed", 00:13:05.668 "digest": "sha512", 00:13:05.668 "dhgroup": "null" 00:13:05.668 } 00:13:05.668 } 00:13:05.668 ]' 00:13:05.668 09:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:05.668 09:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:05.668 09:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:05.668 09:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:05.668 09:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:05.668 09:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:05.668 09:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:05.668 09:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:05.927 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTc2NTlmZTNlOWQ4ZTA4NWZjZDAyOWViMjEwZjk5MGVmNTVjMjU5MDhjYTlhMzYyF3mKKA==: --dhchap-ctrl-secret DHHC-1:01:MTJjZTU3YjQ4NzFlZjRmMmYxZDM1MzZlMjMyZDIxNjTWc7wa: 00:13:05.927 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --hostid 8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -l 0 --dhchap-secret DHHC-1:02:NTc2NTlmZTNlOWQ4ZTA4NWZjZDAyOWViMjEwZjk5MGVmNTVjMjU5MDhjYTlhMzYyF3mKKA==: --dhchap-ctrl-secret DHHC-1:01:MTJjZTU3YjQ4NzFlZjRmMmYxZDM1MzZlMjMyZDIxNjTWc7wa: 00:13:06.495 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:06.495 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:06.495 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:13:06.495 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.495 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:13:06.495 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.495 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:06.495 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:06.495 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:06.754 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:13:06.754 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:06.754 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:06.754 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:06.754 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:06.754 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:06.754 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --dhchap-key key3 00:13:06.754 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.754 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.754 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.754 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:06.754 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:06.754 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:07.323 00:13:07.323 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:07.323 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:07.323 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:07.582 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:07.582 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:07.582 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:07.582 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.582 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.582 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:07.582 { 00:13:07.582 "cntlid": 103, 00:13:07.582 "qid": 0, 00:13:07.582 "state": "enabled", 00:13:07.582 "thread": "nvmf_tgt_poll_group_000", 00:13:07.582 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7", 00:13:07.582 "listen_address": { 00:13:07.582 "trtype": "TCP", 00:13:07.582 "adrfam": "IPv4", 00:13:07.582 "traddr": "10.0.0.3", 00:13:07.582 "trsvcid": "4420" 00:13:07.582 }, 00:13:07.582 "peer_address": { 00:13:07.582 "trtype": "TCP", 00:13:07.582 "adrfam": "IPv4", 00:13:07.582 "traddr": "10.0.0.1", 00:13:07.582 "trsvcid": "60218" 00:13:07.582 }, 00:13:07.582 "auth": { 00:13:07.582 "state": "completed", 00:13:07.582 "digest": "sha512", 00:13:07.582 "dhgroup": "null" 00:13:07.582 } 00:13:07.582 } 00:13:07.582 ]' 00:13:07.582 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:07.582 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:07.582 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:07.582 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:07.582 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:07.582 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:07.582 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:07.582 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:07.840 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2MxYzY4OWY1NTJjNzk1NTlmZmI1OWQ3MzQyNjI3YzU0MGViZTY1NmU0Yzk2MzY3NjZmZTE2MDQzNmY2MjgzMFH2Qo0=: 00:13:07.840 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --hostid 8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -l 0 --dhchap-secret DHHC-1:03:N2MxYzY4OWY1NTJjNzk1NTlmZmI1OWQ3MzQyNjI3YzU0MGViZTY1NmU0Yzk2MzY3NjZmZTE2MDQzNmY2MjgzMFH2Qo0=: 00:13:08.408 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:08.408 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:08.408 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:13:08.408 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.408 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.409 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:13:08.409 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:08.409 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:08.409 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:08.409 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:08.976 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:13:08.976 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:08.976 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:08.976 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:08.976 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:08.976 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:08.976 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:08.976 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.977 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.977 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.977 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:08.977 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:08.977 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:09.235 00:13:09.235 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:09.235 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:09.235 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:09.495 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:09.495 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:09.495 
09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.495 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.495 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.495 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:09.495 { 00:13:09.495 "cntlid": 105, 00:13:09.495 "qid": 0, 00:13:09.495 "state": "enabled", 00:13:09.495 "thread": "nvmf_tgt_poll_group_000", 00:13:09.495 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7", 00:13:09.495 "listen_address": { 00:13:09.495 "trtype": "TCP", 00:13:09.495 "adrfam": "IPv4", 00:13:09.495 "traddr": "10.0.0.3", 00:13:09.495 "trsvcid": "4420" 00:13:09.495 }, 00:13:09.495 "peer_address": { 00:13:09.495 "trtype": "TCP", 00:13:09.495 "adrfam": "IPv4", 00:13:09.495 "traddr": "10.0.0.1", 00:13:09.495 "trsvcid": "56996" 00:13:09.495 }, 00:13:09.495 "auth": { 00:13:09.495 "state": "completed", 00:13:09.495 "digest": "sha512", 00:13:09.495 "dhgroup": "ffdhe2048" 00:13:09.495 } 00:13:09.495 } 00:13:09.495 ]' 00:13:09.495 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:09.495 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:09.495 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:09.495 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:09.495 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:09.495 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:09.495 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:09.495 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:09.754 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDQ3NGZhMzBjYzM5YjgwMmVmYzRjNTUxMTQ0ZjEyYzM3ODk0NzRkNDYxMzUzNTE18ttuoA==: --dhchap-ctrl-secret DHHC-1:03:YjJjZjQzMDlmMTk2MWM1MTU1MGZhNjE0NTA1YWJlZjI4NDVjM2U2YmNhNmI0YTFlMGFlNzg2OGM2YTQ3NGE1OWdh3Iw=: 00:13:09.754 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --hostid 8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -l 0 --dhchap-secret DHHC-1:00:NDQ3NGZhMzBjYzM5YjgwMmVmYzRjNTUxMTQ0ZjEyYzM3ODk0NzRkNDYxMzUzNTE18ttuoA==: --dhchap-ctrl-secret DHHC-1:03:YjJjZjQzMDlmMTk2MWM1MTU1MGZhNjE0NTA1YWJlZjI4NDVjM2U2YmNhNmI0YTFlMGFlNzg2OGM2YTQ3NGE1OWdh3Iw=: 00:13:10.701 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:10.701 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:10.701 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:13:10.701 09:50:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.701 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.701 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.701 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:10.701 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:10.701 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:10.701 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:13:10.701 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:10.701 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:10.701 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:10.701 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:10.701 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:10.701 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:10.701 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.701 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.701 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.701 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:10.701 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:10.702 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:10.960 00:13:10.960 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:10.960 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:10.960 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:11.220 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:13:11.220 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:11.220 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.220 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:11.220 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.220 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:11.220 { 00:13:11.220 "cntlid": 107, 00:13:11.220 "qid": 0, 00:13:11.220 "state": "enabled", 00:13:11.220 "thread": "nvmf_tgt_poll_group_000", 00:13:11.220 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7", 00:13:11.220 "listen_address": { 00:13:11.220 "trtype": "TCP", 00:13:11.220 "adrfam": "IPv4", 00:13:11.220 "traddr": "10.0.0.3", 00:13:11.220 "trsvcid": "4420" 00:13:11.220 }, 00:13:11.220 "peer_address": { 00:13:11.220 "trtype": "TCP", 00:13:11.220 "adrfam": "IPv4", 00:13:11.220 "traddr": "10.0.0.1", 00:13:11.220 "trsvcid": "57038" 00:13:11.220 }, 00:13:11.220 "auth": { 00:13:11.220 "state": "completed", 00:13:11.220 "digest": "sha512", 00:13:11.220 "dhgroup": "ffdhe2048" 00:13:11.220 } 00:13:11.220 } 00:13:11.220 ]' 00:13:11.220 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:11.479 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:11.479 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:11.479 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:11.479 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:11.479 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:11.479 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:11.479 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:11.747 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmU0NDE0MjlmMDc4MGJlYTczZDg5YWY5MjRkZDVhMGToiSZf: --dhchap-ctrl-secret DHHC-1:02:YmI1YmE4MGIzNDU0NTQ1NTk2YzQyMDVmMDE0NDg0NWY1MjYzYTFmMWY2NmM3YzE3ilgsEw==: 00:13:11.748 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --hostid 8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -l 0 --dhchap-secret DHHC-1:01:ZmU0NDE0MjlmMDc4MGJlYTczZDg5YWY5MjRkZDVhMGToiSZf: --dhchap-ctrl-secret DHHC-1:02:YmI1YmE4MGIzNDU0NTQ1NTk2YzQyMDVmMDE0NDg0NWY1MjYzYTFmMWY2NmM3YzE3ilgsEw==: 00:13:12.373 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:12.373 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:12.373 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:13:12.373 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.373 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.373 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.373 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:12.373 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:12.373 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:12.632 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:13:12.632 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:12.632 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:12.632 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:12.632 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:12.632 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:12.632 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:12.632 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.632 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.632 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.632 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:12.632 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:12.633 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:13.200 00:13:13.200 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:13.200 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:13.200 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:13:13.459 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:13.459 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:13.459 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.459 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:13.459 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.459 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:13.459 { 00:13:13.459 "cntlid": 109, 00:13:13.459 "qid": 0, 00:13:13.459 "state": "enabled", 00:13:13.459 "thread": "nvmf_tgt_poll_group_000", 00:13:13.459 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7", 00:13:13.459 "listen_address": { 00:13:13.459 "trtype": "TCP", 00:13:13.459 "adrfam": "IPv4", 00:13:13.459 "traddr": "10.0.0.3", 00:13:13.459 "trsvcid": "4420" 00:13:13.459 }, 00:13:13.459 "peer_address": { 00:13:13.459 "trtype": "TCP", 00:13:13.459 "adrfam": "IPv4", 00:13:13.459 "traddr": "10.0.0.1", 00:13:13.459 "trsvcid": "57082" 00:13:13.459 }, 00:13:13.459 "auth": { 00:13:13.459 "state": "completed", 00:13:13.459 "digest": "sha512", 00:13:13.459 "dhgroup": "ffdhe2048" 00:13:13.459 } 00:13:13.459 } 00:13:13.459 ]' 00:13:13.459 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:13.459 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:13.459 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:13.459 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:13.459 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:13.459 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:13.459 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:13.459 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:13.717 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTc2NTlmZTNlOWQ4ZTA4NWZjZDAyOWViMjEwZjk5MGVmNTVjMjU5MDhjYTlhMzYyF3mKKA==: --dhchap-ctrl-secret DHHC-1:01:MTJjZTU3YjQ4NzFlZjRmMmYxZDM1MzZlMjMyZDIxNjTWc7wa: 00:13:13.717 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --hostid 8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -l 0 --dhchap-secret DHHC-1:02:NTc2NTlmZTNlOWQ4ZTA4NWZjZDAyOWViMjEwZjk5MGVmNTVjMjU5MDhjYTlhMzYyF3mKKA==: --dhchap-ctrl-secret DHHC-1:01:MTJjZTU3YjQ4NzFlZjRmMmYxZDM1MzZlMjMyZDIxNjTWc7wa: 00:13:14.656 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:14.656 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:14.656 09:50:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:13:14.656 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.656 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.656 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.656 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:14.656 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:14.656 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:14.915 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:13:14.915 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:14.915 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:14.915 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:14.915 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:14.915 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:14.915 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --dhchap-key key3 00:13:14.915 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.915 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.915 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.915 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:14.915 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:14.915 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:15.174 00:13:15.174 09:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:15.174 09:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:15.174 09:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:13:15.433 09:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:15.433 09:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:15.433 09:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.433 09:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:15.433 09:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.433 09:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:15.433 { 00:13:15.433 "cntlid": 111, 00:13:15.433 "qid": 0, 00:13:15.433 "state": "enabled", 00:13:15.433 "thread": "nvmf_tgt_poll_group_000", 00:13:15.433 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7", 00:13:15.433 "listen_address": { 00:13:15.433 "trtype": "TCP", 00:13:15.433 "adrfam": "IPv4", 00:13:15.433 "traddr": "10.0.0.3", 00:13:15.433 "trsvcid": "4420" 00:13:15.433 }, 00:13:15.433 "peer_address": { 00:13:15.433 "trtype": "TCP", 00:13:15.433 "adrfam": "IPv4", 00:13:15.433 "traddr": "10.0.0.1", 00:13:15.433 "trsvcid": "57102" 00:13:15.433 }, 00:13:15.433 "auth": { 00:13:15.433 "state": "completed", 00:13:15.433 "digest": "sha512", 00:13:15.433 "dhgroup": "ffdhe2048" 00:13:15.433 } 00:13:15.433 } 00:13:15.433 ]' 00:13:15.433 09:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:15.692 09:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:15.692 09:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:15.692 09:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:15.692 09:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:15.692 09:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:15.692 09:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:15.692 09:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:15.951 09:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2MxYzY4OWY1NTJjNzk1NTlmZmI1OWQ3MzQyNjI3YzU0MGViZTY1NmU0Yzk2MzY3NjZmZTE2MDQzNmY2MjgzMFH2Qo0=: 00:13:15.951 09:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --hostid 8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -l 0 --dhchap-secret DHHC-1:03:N2MxYzY4OWY1NTJjNzk1NTlmZmI1OWQ3MzQyNjI3YzU0MGViZTY1NmU0Yzk2MzY3NjZmZTE2MDQzNmY2MjgzMFH2Qo0=: 00:13:16.520 09:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:16.520 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:16.520 09:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:13:16.520 09:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.520 09:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.520 09:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.520 09:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:16.520 09:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:16.520 09:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:16.520 09:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:17.087 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:13:17.087 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:17.087 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:17.087 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:17.087 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:17.087 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:17.087 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:17.087 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.087 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.087 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.087 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:17.087 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:17.087 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:17.347 00:13:17.347 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:17.347 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:17.347 09:50:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:17.606 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:17.606 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:17.606 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.606 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.606 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.606 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:17.606 { 00:13:17.606 "cntlid": 113, 00:13:17.606 "qid": 0, 00:13:17.606 "state": "enabled", 00:13:17.606 "thread": "nvmf_tgt_poll_group_000", 00:13:17.606 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7", 00:13:17.606 "listen_address": { 00:13:17.606 "trtype": "TCP", 00:13:17.606 "adrfam": "IPv4", 00:13:17.606 "traddr": "10.0.0.3", 00:13:17.606 "trsvcid": "4420" 00:13:17.606 }, 00:13:17.606 "peer_address": { 00:13:17.606 "trtype": "TCP", 00:13:17.606 "adrfam": "IPv4", 00:13:17.606 "traddr": "10.0.0.1", 00:13:17.606 "trsvcid": "57126" 00:13:17.606 }, 00:13:17.606 "auth": { 00:13:17.606 "state": "completed", 00:13:17.606 "digest": "sha512", 00:13:17.606 "dhgroup": "ffdhe3072" 00:13:17.606 } 00:13:17.606 } 00:13:17.606 ]' 00:13:17.606 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:17.606 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:17.606 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:17.606 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:17.606 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:17.865 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:17.865 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:17.865 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:18.124 09:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDQ3NGZhMzBjYzM5YjgwMmVmYzRjNTUxMTQ0ZjEyYzM3ODk0NzRkNDYxMzUzNTE18ttuoA==: --dhchap-ctrl-secret DHHC-1:03:YjJjZjQzMDlmMTk2MWM1MTU1MGZhNjE0NTA1YWJlZjI4NDVjM2U2YmNhNmI0YTFlMGFlNzg2OGM2YTQ3NGE1OWdh3Iw=: 00:13:18.124 09:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --hostid 8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -l 0 --dhchap-secret DHHC-1:00:NDQ3NGZhMzBjYzM5YjgwMmVmYzRjNTUxMTQ0ZjEyYzM3ODk0NzRkNDYxMzUzNTE18ttuoA==: --dhchap-ctrl-secret DHHC-1:03:YjJjZjQzMDlmMTk2MWM1MTU1MGZhNjE0NTA1YWJlZjI4NDVjM2U2YmNhNmI0YTFlMGFlNzg2OGM2YTQ3NGE1OWdh3Iw=: 
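The cycle traced above repeats for every digest/dhgroup combination that target/auth.sh exercises. Condensed into a single pass, and assuming the same RPC script path, subsystem NQN, host NQN/UUID, socket paths, and key names (key0/ckey0) that appear throughout this log, one DH-HMAC-CHAP round looks roughly like the following sketch (not the verbatim test script):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    SUBNQN=nqn.2024-03.io.spdk:cnode0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7

    # Host side: restrict the bdev_nvme layer to one digest/dhgroup pair.
    $RPC -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048

    # Target side: authorize the host with a DH-CHAP key (plus optional controller key).
    $RPC nvmf_subsystem_add_host $SUBNQN $HOSTNQN --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # Host side: attach over TCP, then confirm the negotiated auth parameters on the target.
    $RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
        -q $HOSTNQN -n $SUBNQN -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
    $RPC nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth'

    # Tear down, then repeat the check with the kernel initiator using the raw DHHC-1 secrets.
    $RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    nvme connect -t tcp -a 10.0.0.3 -n $SUBNQN -q $HOSTNQN -l 0 --dhchap-secret "$DHHC1_KEY" --dhchap-ctrl-secret "$DHHC1_CKEY"
    nvme disconnect -n $SUBNQN
    $RPC nvmf_subsystem_remove_host $SUBNQN $HOSTNQN

Here DHHC1_KEY and DHHC1_CKEY stand in for the DHHC-1:xx:...: secret strings passed to the nvme connect invocations recorded above.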
00:13:18.692 09:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:18.692 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:18.692 09:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:13:18.692 09:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.692 09:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.692 09:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.692 09:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:18.692 09:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:18.692 09:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:18.951 09:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:13:18.951 09:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:18.951 09:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:18.951 09:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:18.951 09:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:18.952 09:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:18.952 09:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:18.952 09:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.952 09:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.952 09:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.952 09:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:18.952 09:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:18.952 09:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:19.211 00:13:19.211 09:50:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:19.211 09:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:19.211 09:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:19.470 09:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:19.470 09:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:19.470 09:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.470 09:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.470 09:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.470 09:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:19.470 { 00:13:19.470 "cntlid": 115, 00:13:19.470 "qid": 0, 00:13:19.470 "state": "enabled", 00:13:19.470 "thread": "nvmf_tgt_poll_group_000", 00:13:19.470 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7", 00:13:19.470 "listen_address": { 00:13:19.470 "trtype": "TCP", 00:13:19.470 "adrfam": "IPv4", 00:13:19.470 "traddr": "10.0.0.3", 00:13:19.470 "trsvcid": "4420" 00:13:19.470 }, 00:13:19.470 "peer_address": { 00:13:19.470 "trtype": "TCP", 00:13:19.470 "adrfam": "IPv4", 00:13:19.470 "traddr": "10.0.0.1", 00:13:19.470 "trsvcid": "52474" 00:13:19.470 }, 00:13:19.470 "auth": { 00:13:19.470 "state": "completed", 00:13:19.470 "digest": "sha512", 00:13:19.470 "dhgroup": "ffdhe3072" 00:13:19.470 } 00:13:19.470 } 00:13:19.470 ]' 00:13:19.470 09:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:19.471 09:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:19.471 09:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:19.730 09:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:19.730 09:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:19.730 09:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:19.730 09:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:19.730 09:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:19.989 09:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmU0NDE0MjlmMDc4MGJlYTczZDg5YWY5MjRkZDVhMGToiSZf: --dhchap-ctrl-secret DHHC-1:02:YmI1YmE4MGIzNDU0NTQ1NTk2YzQyMDVmMDE0NDg0NWY1MjYzYTFmMWY2NmM3YzE3ilgsEw==: 00:13:19.989 09:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --hostid 8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -l 0 --dhchap-secret 
DHHC-1:01:ZmU0NDE0MjlmMDc4MGJlYTczZDg5YWY5MjRkZDVhMGToiSZf: --dhchap-ctrl-secret DHHC-1:02:YmI1YmE4MGIzNDU0NTQ1NTk2YzQyMDVmMDE0NDg0NWY1MjYzYTFmMWY2NmM3YzE3ilgsEw==: 00:13:20.925 09:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:20.925 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:20.926 09:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:13:20.926 09:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.926 09:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.926 09:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.926 09:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:20.926 09:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:20.926 09:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:20.926 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:13:20.926 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:20.926 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:20.926 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:20.926 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:20.926 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:20.926 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:20.926 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.926 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.926 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.926 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:20.926 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:20.926 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:21.493 00:13:21.493 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:21.493 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:21.493 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:21.493 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:21.493 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:21.493 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.493 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:21.493 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.493 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:21.493 { 00:13:21.493 "cntlid": 117, 00:13:21.493 "qid": 0, 00:13:21.493 "state": "enabled", 00:13:21.493 "thread": "nvmf_tgt_poll_group_000", 00:13:21.493 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7", 00:13:21.493 "listen_address": { 00:13:21.493 "trtype": "TCP", 00:13:21.493 "adrfam": "IPv4", 00:13:21.493 "traddr": "10.0.0.3", 00:13:21.493 "trsvcid": "4420" 00:13:21.493 }, 00:13:21.493 "peer_address": { 00:13:21.493 "trtype": "TCP", 00:13:21.493 "adrfam": "IPv4", 00:13:21.493 "traddr": "10.0.0.1", 00:13:21.493 "trsvcid": "52490" 00:13:21.493 }, 00:13:21.493 "auth": { 00:13:21.493 "state": "completed", 00:13:21.493 "digest": "sha512", 00:13:21.493 "dhgroup": "ffdhe3072" 00:13:21.493 } 00:13:21.493 } 00:13:21.493 ]' 00:13:21.493 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:21.752 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:21.752 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:21.752 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:21.752 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:21.752 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:21.752 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:21.752 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:22.012 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTc2NTlmZTNlOWQ4ZTA4NWZjZDAyOWViMjEwZjk5MGVmNTVjMjU5MDhjYTlhMzYyF3mKKA==: --dhchap-ctrl-secret DHHC-1:01:MTJjZTU3YjQ4NzFlZjRmMmYxZDM1MzZlMjMyZDIxNjTWc7wa: 00:13:22.012 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --hostid 8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -l 0 --dhchap-secret DHHC-1:02:NTc2NTlmZTNlOWQ4ZTA4NWZjZDAyOWViMjEwZjk5MGVmNTVjMjU5MDhjYTlhMzYyF3mKKA==: --dhchap-ctrl-secret DHHC-1:01:MTJjZTU3YjQ4NzFlZjRmMmYxZDM1MzZlMjMyZDIxNjTWc7wa: 00:13:22.958 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:22.959 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:22.959 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:13:22.959 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.959 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.959 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.959 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:22.959 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:22.959 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:22.959 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:13:22.959 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:22.959 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:22.959 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:22.959 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:22.959 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:22.959 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --dhchap-key key3 00:13:22.959 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.959 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.959 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.959 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:22.959 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:22.959 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:23.236 00:13:23.236 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:23.236 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:23.236 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:23.495 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:23.495 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:23.495 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.495 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:23.495 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.495 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:23.495 { 00:13:23.495 "cntlid": 119, 00:13:23.495 "qid": 0, 00:13:23.495 "state": "enabled", 00:13:23.495 "thread": "nvmf_tgt_poll_group_000", 00:13:23.495 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7", 00:13:23.495 "listen_address": { 00:13:23.495 "trtype": "TCP", 00:13:23.495 "adrfam": "IPv4", 00:13:23.495 "traddr": "10.0.0.3", 00:13:23.495 "trsvcid": "4420" 00:13:23.495 }, 00:13:23.495 "peer_address": { 00:13:23.495 "trtype": "TCP", 00:13:23.495 "adrfam": "IPv4", 00:13:23.495 "traddr": "10.0.0.1", 00:13:23.495 "trsvcid": "52504" 00:13:23.495 }, 00:13:23.495 "auth": { 00:13:23.495 "state": "completed", 00:13:23.495 "digest": "sha512", 00:13:23.495 "dhgroup": "ffdhe3072" 00:13:23.495 } 00:13:23.495 } 00:13:23.495 ]' 00:13:23.495 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:23.755 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:23.755 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:23.755 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:23.755 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:23.755 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:23.755 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:23.755 09:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:24.014 09:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2MxYzY4OWY1NTJjNzk1NTlmZmI1OWQ3MzQyNjI3YzU0MGViZTY1NmU0Yzk2MzY3NjZmZTE2MDQzNmY2MjgzMFH2Qo0=: 00:13:24.014 09:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 
-q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --hostid 8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -l 0 --dhchap-secret DHHC-1:03:N2MxYzY4OWY1NTJjNzk1NTlmZmI1OWQ3MzQyNjI3YzU0MGViZTY1NmU0Yzk2MzY3NjZmZTE2MDQzNmY2MjgzMFH2Qo0=: 00:13:24.951 09:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:24.951 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:24.951 09:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:13:24.951 09:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.951 09:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:24.951 09:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.951 09:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:24.951 09:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:24.951 09:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:24.951 09:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:24.951 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:13:24.951 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:24.951 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:24.951 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:24.951 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:24.951 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:24.951 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:24.951 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.951 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:24.951 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.951 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:24.951 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:24.951 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:25.519 00:13:25.519 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:25.519 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:25.519 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:25.778 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:25.778 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:25.778 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.778 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.778 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.778 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:25.778 { 00:13:25.778 "cntlid": 121, 00:13:25.778 "qid": 0, 00:13:25.778 "state": "enabled", 00:13:25.778 "thread": "nvmf_tgt_poll_group_000", 00:13:25.778 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7", 00:13:25.778 "listen_address": { 00:13:25.778 "trtype": "TCP", 00:13:25.778 "adrfam": "IPv4", 00:13:25.778 "traddr": "10.0.0.3", 00:13:25.778 "trsvcid": "4420" 00:13:25.778 }, 00:13:25.778 "peer_address": { 00:13:25.778 "trtype": "TCP", 00:13:25.778 "adrfam": "IPv4", 00:13:25.778 "traddr": "10.0.0.1", 00:13:25.778 "trsvcid": "52540" 00:13:25.778 }, 00:13:25.778 "auth": { 00:13:25.778 "state": "completed", 00:13:25.778 "digest": "sha512", 00:13:25.778 "dhgroup": "ffdhe4096" 00:13:25.778 } 00:13:25.778 } 00:13:25.778 ]' 00:13:25.778 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:25.779 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:25.779 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:26.038 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:26.038 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:26.038 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:26.038 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:26.038 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:26.297 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDQ3NGZhMzBjYzM5YjgwMmVmYzRjNTUxMTQ0ZjEyYzM3ODk0NzRkNDYxMzUzNTE18ttuoA==: --dhchap-ctrl-secret 
DHHC-1:03:YjJjZjQzMDlmMTk2MWM1MTU1MGZhNjE0NTA1YWJlZjI4NDVjM2U2YmNhNmI0YTFlMGFlNzg2OGM2YTQ3NGE1OWdh3Iw=: 00:13:26.297 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --hostid 8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -l 0 --dhchap-secret DHHC-1:00:NDQ3NGZhMzBjYzM5YjgwMmVmYzRjNTUxMTQ0ZjEyYzM3ODk0NzRkNDYxMzUzNTE18ttuoA==: --dhchap-ctrl-secret DHHC-1:03:YjJjZjQzMDlmMTk2MWM1MTU1MGZhNjE0NTA1YWJlZjI4NDVjM2U2YmNhNmI0YTFlMGFlNzg2OGM2YTQ3NGE1OWdh3Iw=: 00:13:26.865 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:26.865 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:26.865 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:13:26.865 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.865 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:26.865 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.865 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:26.865 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:26.865 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:27.125 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:13:27.125 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:27.125 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:27.125 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:27.125 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:27.125 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:27.125 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:27.125 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.125 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:27.125 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.125 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:27.125 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:27.125 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:27.693 00:13:27.693 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:27.693 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:27.693 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:27.952 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:27.952 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:27.952 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.952 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:27.952 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.952 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:27.952 { 00:13:27.952 "cntlid": 123, 00:13:27.952 "qid": 0, 00:13:27.952 "state": "enabled", 00:13:27.952 "thread": "nvmf_tgt_poll_group_000", 00:13:27.952 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7", 00:13:27.952 "listen_address": { 00:13:27.952 "trtype": "TCP", 00:13:27.952 "adrfam": "IPv4", 00:13:27.952 "traddr": "10.0.0.3", 00:13:27.952 "trsvcid": "4420" 00:13:27.952 }, 00:13:27.952 "peer_address": { 00:13:27.952 "trtype": "TCP", 00:13:27.952 "adrfam": "IPv4", 00:13:27.952 "traddr": "10.0.0.1", 00:13:27.952 "trsvcid": "52554" 00:13:27.952 }, 00:13:27.952 "auth": { 00:13:27.952 "state": "completed", 00:13:27.952 "digest": "sha512", 00:13:27.952 "dhgroup": "ffdhe4096" 00:13:27.952 } 00:13:27.952 } 00:13:27.952 ]' 00:13:27.952 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:27.952 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:27.952 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:27.952 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:27.952 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:28.211 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:28.211 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:28.211 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:28.211 09:50:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmU0NDE0MjlmMDc4MGJlYTczZDg5YWY5MjRkZDVhMGToiSZf: --dhchap-ctrl-secret DHHC-1:02:YmI1YmE4MGIzNDU0NTQ1NTk2YzQyMDVmMDE0NDg0NWY1MjYzYTFmMWY2NmM3YzE3ilgsEw==: 00:13:28.211 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --hostid 8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -l 0 --dhchap-secret DHHC-1:01:ZmU0NDE0MjlmMDc4MGJlYTczZDg5YWY5MjRkZDVhMGToiSZf: --dhchap-ctrl-secret DHHC-1:02:YmI1YmE4MGIzNDU0NTQ1NTk2YzQyMDVmMDE0NDg0NWY1MjYzYTFmMWY2NmM3YzE3ilgsEw==: 00:13:29.158 09:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:29.158 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:29.158 09:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:13:29.158 09:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.158 09:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.158 09:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.158 09:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:29.158 09:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:29.158 09:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:29.416 09:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:13:29.416 09:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:29.416 09:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:29.416 09:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:29.416 09:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:29.416 09:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:29.416 09:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:29.416 09:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.416 09:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.416 09:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.416 09:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:29.416 09:50:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:29.416 09:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:29.674 00:13:29.674 09:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:29.674 09:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:29.674 09:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:29.932 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:29.932 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:29.932 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.932 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.932 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.932 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:29.932 { 00:13:29.932 "cntlid": 125, 00:13:29.932 "qid": 0, 00:13:29.932 "state": "enabled", 00:13:29.932 "thread": "nvmf_tgt_poll_group_000", 00:13:29.932 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7", 00:13:29.932 "listen_address": { 00:13:29.932 "trtype": "TCP", 00:13:29.932 "adrfam": "IPv4", 00:13:29.932 "traddr": "10.0.0.3", 00:13:29.932 "trsvcid": "4420" 00:13:29.932 }, 00:13:29.932 "peer_address": { 00:13:29.932 "trtype": "TCP", 00:13:29.932 "adrfam": "IPv4", 00:13:29.932 "traddr": "10.0.0.1", 00:13:29.932 "trsvcid": "41160" 00:13:29.932 }, 00:13:29.932 "auth": { 00:13:29.932 "state": "completed", 00:13:29.932 "digest": "sha512", 00:13:29.932 "dhgroup": "ffdhe4096" 00:13:29.932 } 00:13:29.932 } 00:13:29.932 ]' 00:13:29.932 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:29.932 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:29.932 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:29.932 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:29.932 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:30.190 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:30.190 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:30.190 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:30.448 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTc2NTlmZTNlOWQ4ZTA4NWZjZDAyOWViMjEwZjk5MGVmNTVjMjU5MDhjYTlhMzYyF3mKKA==: --dhchap-ctrl-secret DHHC-1:01:MTJjZTU3YjQ4NzFlZjRmMmYxZDM1MzZlMjMyZDIxNjTWc7wa: 00:13:30.448 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --hostid 8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -l 0 --dhchap-secret DHHC-1:02:NTc2NTlmZTNlOWQ4ZTA4NWZjZDAyOWViMjEwZjk5MGVmNTVjMjU5MDhjYTlhMzYyF3mKKA==: --dhchap-ctrl-secret DHHC-1:01:MTJjZTU3YjQ4NzFlZjRmMmYxZDM1MzZlMjMyZDIxNjTWc7wa: 00:13:31.015 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:31.015 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:31.015 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:13:31.015 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.015 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.015 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.015 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:31.015 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:31.015 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:31.274 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:13:31.274 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:31.274 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:31.274 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:31.274 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:31.274 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:31.274 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --dhchap-key key3 00:13:31.274 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.274 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.274 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.274 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key3 00:13:31.274 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:31.275 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:31.532 00:13:31.532 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:31.532 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:31.532 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:31.790 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:31.790 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:31.790 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.790 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.790 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.790 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:31.790 { 00:13:31.790 "cntlid": 127, 00:13:31.790 "qid": 0, 00:13:31.790 "state": "enabled", 00:13:31.790 "thread": "nvmf_tgt_poll_group_000", 00:13:31.790 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7", 00:13:31.790 "listen_address": { 00:13:31.790 "trtype": "TCP", 00:13:31.790 "adrfam": "IPv4", 00:13:31.790 "traddr": "10.0.0.3", 00:13:31.790 "trsvcid": "4420" 00:13:31.790 }, 00:13:31.790 "peer_address": { 00:13:31.790 "trtype": "TCP", 00:13:31.790 "adrfam": "IPv4", 00:13:31.790 "traddr": "10.0.0.1", 00:13:31.790 "trsvcid": "41202" 00:13:31.790 }, 00:13:31.790 "auth": { 00:13:31.790 "state": "completed", 00:13:31.790 "digest": "sha512", 00:13:31.790 "dhgroup": "ffdhe4096" 00:13:31.790 } 00:13:31.790 } 00:13:31.790 ]' 00:13:31.790 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:32.048 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:32.048 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:32.048 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:32.048 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:32.048 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:32.048 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:32.048 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:32.306 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2MxYzY4OWY1NTJjNzk1NTlmZmI1OWQ3MzQyNjI3YzU0MGViZTY1NmU0Yzk2MzY3NjZmZTE2MDQzNmY2MjgzMFH2Qo0=: 00:13:32.306 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --hostid 8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -l 0 --dhchap-secret DHHC-1:03:N2MxYzY4OWY1NTJjNzk1NTlmZmI1OWQ3MzQyNjI3YzU0MGViZTY1NmU0Yzk2MzY3NjZmZTE2MDQzNmY2MjgzMFH2Qo0=: 00:13:32.872 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:32.872 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:32.872 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:13:32.872 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.872 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.872 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.872 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:32.872 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:32.872 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:32.872 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:33.131 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:13:33.131 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:33.131 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:33.131 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:33.131 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:33.131 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:33.131 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:33.131 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.131 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.131 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.131 09:50:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:33.131 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:33.131 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:33.698 00:13:33.698 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:33.698 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:33.698 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:33.956 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:33.956 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:33.956 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.956 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.956 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.957 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:33.957 { 00:13:33.957 "cntlid": 129, 00:13:33.957 "qid": 0, 00:13:33.957 "state": "enabled", 00:13:33.957 "thread": "nvmf_tgt_poll_group_000", 00:13:33.957 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7", 00:13:33.957 "listen_address": { 00:13:33.957 "trtype": "TCP", 00:13:33.957 "adrfam": "IPv4", 00:13:33.957 "traddr": "10.0.0.3", 00:13:33.957 "trsvcid": "4420" 00:13:33.957 }, 00:13:33.957 "peer_address": { 00:13:33.957 "trtype": "TCP", 00:13:33.957 "adrfam": "IPv4", 00:13:33.957 "traddr": "10.0.0.1", 00:13:33.957 "trsvcid": "41224" 00:13:33.957 }, 00:13:33.957 "auth": { 00:13:33.957 "state": "completed", 00:13:33.957 "digest": "sha512", 00:13:33.957 "dhgroup": "ffdhe6144" 00:13:33.957 } 00:13:33.957 } 00:13:33.957 ]' 00:13:33.957 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:33.957 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:33.957 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:33.957 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:33.957 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:33.957 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:33.957 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:33.957 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:34.214 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDQ3NGZhMzBjYzM5YjgwMmVmYzRjNTUxMTQ0ZjEyYzM3ODk0NzRkNDYxMzUzNTE18ttuoA==: --dhchap-ctrl-secret DHHC-1:03:YjJjZjQzMDlmMTk2MWM1MTU1MGZhNjE0NTA1YWJlZjI4NDVjM2U2YmNhNmI0YTFlMGFlNzg2OGM2YTQ3NGE1OWdh3Iw=: 00:13:34.215 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --hostid 8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -l 0 --dhchap-secret DHHC-1:00:NDQ3NGZhMzBjYzM5YjgwMmVmYzRjNTUxMTQ0ZjEyYzM3ODk0NzRkNDYxMzUzNTE18ttuoA==: --dhchap-ctrl-secret DHHC-1:03:YjJjZjQzMDlmMTk2MWM1MTU1MGZhNjE0NTA1YWJlZjI4NDVjM2U2YmNhNmI0YTFlMGFlNzg2OGM2YTQ3NGE1OWdh3Iw=: 00:13:34.825 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:34.825 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:34.825 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:13:34.825 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.825 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.825 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.825 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:34.825 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:34.825 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:35.084 09:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:13:35.084 09:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:35.084 09:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:35.084 09:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:35.084 09:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:35.084 09:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:35.084 09:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:35.084 09:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.084 09:51:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.084 09:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.084 09:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:35.084 09:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:35.084 09:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:35.652 00:13:35.652 09:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:35.652 09:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:35.652 09:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:35.910 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:35.910 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:35.910 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.910 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.910 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.910 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:35.910 { 00:13:35.910 "cntlid": 131, 00:13:35.910 "qid": 0, 00:13:35.910 "state": "enabled", 00:13:35.910 "thread": "nvmf_tgt_poll_group_000", 00:13:35.910 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7", 00:13:35.910 "listen_address": { 00:13:35.910 "trtype": "TCP", 00:13:35.910 "adrfam": "IPv4", 00:13:35.910 "traddr": "10.0.0.3", 00:13:35.910 "trsvcid": "4420" 00:13:35.910 }, 00:13:35.910 "peer_address": { 00:13:35.910 "trtype": "TCP", 00:13:35.910 "adrfam": "IPv4", 00:13:35.910 "traddr": "10.0.0.1", 00:13:35.910 "trsvcid": "41264" 00:13:35.910 }, 00:13:35.910 "auth": { 00:13:35.910 "state": "completed", 00:13:35.910 "digest": "sha512", 00:13:35.910 "dhgroup": "ffdhe6144" 00:13:35.910 } 00:13:35.910 } 00:13:35.910 ]' 00:13:35.910 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:36.169 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:36.169 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:36.169 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:36.169 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq 
-r '.[0].auth.state' 00:13:36.169 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:36.169 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:36.169 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:36.428 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmU0NDE0MjlmMDc4MGJlYTczZDg5YWY5MjRkZDVhMGToiSZf: --dhchap-ctrl-secret DHHC-1:02:YmI1YmE4MGIzNDU0NTQ1NTk2YzQyMDVmMDE0NDg0NWY1MjYzYTFmMWY2NmM3YzE3ilgsEw==: 00:13:36.428 09:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --hostid 8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -l 0 --dhchap-secret DHHC-1:01:ZmU0NDE0MjlmMDc4MGJlYTczZDg5YWY5MjRkZDVhMGToiSZf: --dhchap-ctrl-secret DHHC-1:02:YmI1YmE4MGIzNDU0NTQ1NTk2YzQyMDVmMDE0NDg0NWY1MjYzYTFmMWY2NmM3YzE3ilgsEw==: 00:13:36.995 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:36.995 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:36.995 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:13:36.995 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.995 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.254 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.254 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:37.254 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:37.254 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:37.254 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:13:37.254 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:37.254 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:37.254 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:37.254 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:37.254 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:37.254 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:37.254 09:51:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.254 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.254 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.254 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:37.254 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:37.254 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:37.822 00:13:37.822 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:37.822 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:37.822 09:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:38.082 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:38.082 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:38.082 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.082 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.082 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.082 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:38.082 { 00:13:38.082 "cntlid": 133, 00:13:38.082 "qid": 0, 00:13:38.082 "state": "enabled", 00:13:38.082 "thread": "nvmf_tgt_poll_group_000", 00:13:38.082 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7", 00:13:38.082 "listen_address": { 00:13:38.082 "trtype": "TCP", 00:13:38.082 "adrfam": "IPv4", 00:13:38.082 "traddr": "10.0.0.3", 00:13:38.082 "trsvcid": "4420" 00:13:38.082 }, 00:13:38.082 "peer_address": { 00:13:38.082 "trtype": "TCP", 00:13:38.082 "adrfam": "IPv4", 00:13:38.082 "traddr": "10.0.0.1", 00:13:38.082 "trsvcid": "41280" 00:13:38.082 }, 00:13:38.082 "auth": { 00:13:38.082 "state": "completed", 00:13:38.082 "digest": "sha512", 00:13:38.082 "dhgroup": "ffdhe6144" 00:13:38.082 } 00:13:38.082 } 00:13:38.082 ]' 00:13:38.082 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:38.082 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:38.082 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:38.341 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 
== \f\f\d\h\e\6\1\4\4 ]] 00:13:38.341 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:38.341 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:38.341 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:38.341 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:38.600 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTc2NTlmZTNlOWQ4ZTA4NWZjZDAyOWViMjEwZjk5MGVmNTVjMjU5MDhjYTlhMzYyF3mKKA==: --dhchap-ctrl-secret DHHC-1:01:MTJjZTU3YjQ4NzFlZjRmMmYxZDM1MzZlMjMyZDIxNjTWc7wa: 00:13:38.600 09:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --hostid 8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -l 0 --dhchap-secret DHHC-1:02:NTc2NTlmZTNlOWQ4ZTA4NWZjZDAyOWViMjEwZjk5MGVmNTVjMjU5MDhjYTlhMzYyF3mKKA==: --dhchap-ctrl-secret DHHC-1:01:MTJjZTU3YjQ4NzFlZjRmMmYxZDM1MzZlMjMyZDIxNjTWc7wa: 00:13:39.168 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:39.168 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:39.168 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:13:39.168 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.168 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.168 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.168 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:39.168 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:39.168 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:39.427 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:13:39.427 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:39.427 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:39.427 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:39.427 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:39.427 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:39.427 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --dhchap-key key3 00:13:39.427 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.427 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.427 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.427 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:39.427 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:39.427 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:39.996 00:13:39.996 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:39.996 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:39.996 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:40.255 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:40.255 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:40.255 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.255 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.255 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.255 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:40.255 { 00:13:40.255 "cntlid": 135, 00:13:40.255 "qid": 0, 00:13:40.255 "state": "enabled", 00:13:40.255 "thread": "nvmf_tgt_poll_group_000", 00:13:40.255 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7", 00:13:40.255 "listen_address": { 00:13:40.255 "trtype": "TCP", 00:13:40.255 "adrfam": "IPv4", 00:13:40.255 "traddr": "10.0.0.3", 00:13:40.255 "trsvcid": "4420" 00:13:40.255 }, 00:13:40.255 "peer_address": { 00:13:40.255 "trtype": "TCP", 00:13:40.255 "adrfam": "IPv4", 00:13:40.255 "traddr": "10.0.0.1", 00:13:40.255 "trsvcid": "59756" 00:13:40.255 }, 00:13:40.255 "auth": { 00:13:40.255 "state": "completed", 00:13:40.255 "digest": "sha512", 00:13:40.255 "dhgroup": "ffdhe6144" 00:13:40.255 } 00:13:40.255 } 00:13:40.255 ]' 00:13:40.255 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:40.255 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:40.255 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:40.255 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:40.255 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:40.513 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:40.513 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:40.513 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:40.771 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2MxYzY4OWY1NTJjNzk1NTlmZmI1OWQ3MzQyNjI3YzU0MGViZTY1NmU0Yzk2MzY3NjZmZTE2MDQzNmY2MjgzMFH2Qo0=: 00:13:40.771 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --hostid 8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -l 0 --dhchap-secret DHHC-1:03:N2MxYzY4OWY1NTJjNzk1NTlmZmI1OWQ3MzQyNjI3YzU0MGViZTY1NmU0Yzk2MzY3NjZmZTE2MDQzNmY2MjgzMFH2Qo0=: 00:13:41.336 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:41.336 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:41.336 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:13:41.336 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.336 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.336 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.336 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:41.336 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:41.336 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:41.336 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:41.594 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:13:41.594 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:41.594 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:41.594 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:41.594 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:41.594 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:41.594 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:41.594 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.594 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.594 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.594 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:41.594 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:41.594 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:42.531 00:13:42.531 09:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:42.531 09:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:42.531 09:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:42.531 09:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:42.531 09:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:42.531 09:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.531 09:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.531 09:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.531 09:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:42.531 { 00:13:42.531 "cntlid": 137, 00:13:42.531 "qid": 0, 00:13:42.531 "state": "enabled", 00:13:42.531 "thread": "nvmf_tgt_poll_group_000", 00:13:42.531 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7", 00:13:42.531 "listen_address": { 00:13:42.531 "trtype": "TCP", 00:13:42.531 "adrfam": "IPv4", 00:13:42.531 "traddr": "10.0.0.3", 00:13:42.531 "trsvcid": "4420" 00:13:42.531 }, 00:13:42.531 "peer_address": { 00:13:42.531 "trtype": "TCP", 00:13:42.531 "adrfam": "IPv4", 00:13:42.531 "traddr": "10.0.0.1", 00:13:42.531 "trsvcid": "59778" 00:13:42.531 }, 00:13:42.531 "auth": { 00:13:42.531 "state": "completed", 00:13:42.531 "digest": "sha512", 00:13:42.531 "dhgroup": "ffdhe8192" 00:13:42.531 } 00:13:42.531 } 00:13:42.531 ]' 00:13:42.531 09:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:42.531 09:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:42.531 09:51:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:42.791 09:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:42.791 09:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:42.791 09:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:42.791 09:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:42.791 09:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:43.050 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDQ3NGZhMzBjYzM5YjgwMmVmYzRjNTUxMTQ0ZjEyYzM3ODk0NzRkNDYxMzUzNTE18ttuoA==: --dhchap-ctrl-secret DHHC-1:03:YjJjZjQzMDlmMTk2MWM1MTU1MGZhNjE0NTA1YWJlZjI4NDVjM2U2YmNhNmI0YTFlMGFlNzg2OGM2YTQ3NGE1OWdh3Iw=: 00:13:43.050 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --hostid 8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -l 0 --dhchap-secret DHHC-1:00:NDQ3NGZhMzBjYzM5YjgwMmVmYzRjNTUxMTQ0ZjEyYzM3ODk0NzRkNDYxMzUzNTE18ttuoA==: --dhchap-ctrl-secret DHHC-1:03:YjJjZjQzMDlmMTk2MWM1MTU1MGZhNjE0NTA1YWJlZjI4NDVjM2U2YmNhNmI0YTFlMGFlNzg2OGM2YTQ3NGE1OWdh3Iw=: 00:13:43.987 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:43.988 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:43.988 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:13:43.988 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.988 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:43.988 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.988 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:43.988 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:43.988 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:43.988 09:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:13:43.988 09:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:43.988 09:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:43.988 09:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:43.988 09:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:43.988 09:51:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:43.988 09:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:43.988 09:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.988 09:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:43.988 09:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.988 09:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:43.988 09:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:43.988 09:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:44.556 00:13:44.557 09:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:44.557 09:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:44.557 09:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:45.126 09:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:45.126 09:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:45.126 09:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.126 09:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.126 09:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.126 09:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:45.126 { 00:13:45.126 "cntlid": 139, 00:13:45.126 "qid": 0, 00:13:45.126 "state": "enabled", 00:13:45.126 "thread": "nvmf_tgt_poll_group_000", 00:13:45.126 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7", 00:13:45.126 "listen_address": { 00:13:45.126 "trtype": "TCP", 00:13:45.126 "adrfam": "IPv4", 00:13:45.126 "traddr": "10.0.0.3", 00:13:45.126 "trsvcid": "4420" 00:13:45.126 }, 00:13:45.126 "peer_address": { 00:13:45.126 "trtype": "TCP", 00:13:45.126 "adrfam": "IPv4", 00:13:45.126 "traddr": "10.0.0.1", 00:13:45.126 "trsvcid": "59800" 00:13:45.126 }, 00:13:45.126 "auth": { 00:13:45.126 "state": "completed", 00:13:45.126 "digest": "sha512", 00:13:45.126 "dhgroup": "ffdhe8192" 00:13:45.126 } 00:13:45.126 } 00:13:45.126 ]' 00:13:45.126 09:51:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:45.126 09:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:45.126 09:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:45.126 09:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:45.126 09:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:45.126 09:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:45.126 09:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:45.126 09:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:45.401 09:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmU0NDE0MjlmMDc4MGJlYTczZDg5YWY5MjRkZDVhMGToiSZf: --dhchap-ctrl-secret DHHC-1:02:YmI1YmE4MGIzNDU0NTQ1NTk2YzQyMDVmMDE0NDg0NWY1MjYzYTFmMWY2NmM3YzE3ilgsEw==: 00:13:45.401 09:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --hostid 8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -l 0 --dhchap-secret DHHC-1:01:ZmU0NDE0MjlmMDc4MGJlYTczZDg5YWY5MjRkZDVhMGToiSZf: --dhchap-ctrl-secret DHHC-1:02:YmI1YmE4MGIzNDU0NTQ1NTk2YzQyMDVmMDE0NDg0NWY1MjYzYTFmMWY2NmM3YzE3ilgsEw==: 00:13:45.981 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:45.981 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:45.981 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:13:45.981 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.981 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.981 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.981 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:45.981 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:45.981 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:46.240 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:13:46.240 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:46.240 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:46.240 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe8192 00:13:46.240 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:46.240 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:46.240 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:46.240 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.240 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.240 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.240 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:46.240 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:46.240 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:47.177 00:13:47.177 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:47.177 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:47.177 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:47.177 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:47.177 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:47.177 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.177 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.177 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.177 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:47.177 { 00:13:47.177 "cntlid": 141, 00:13:47.177 "qid": 0, 00:13:47.177 "state": "enabled", 00:13:47.177 "thread": "nvmf_tgt_poll_group_000", 00:13:47.177 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7", 00:13:47.177 "listen_address": { 00:13:47.177 "trtype": "TCP", 00:13:47.177 "adrfam": "IPv4", 00:13:47.177 "traddr": "10.0.0.3", 00:13:47.177 "trsvcid": "4420" 00:13:47.177 }, 00:13:47.177 "peer_address": { 00:13:47.177 "trtype": "TCP", 00:13:47.177 "adrfam": "IPv4", 00:13:47.177 "traddr": "10.0.0.1", 00:13:47.177 "trsvcid": "59824" 00:13:47.177 }, 00:13:47.177 "auth": { 00:13:47.177 "state": "completed", 00:13:47.177 "digest": 
"sha512", 00:13:47.177 "dhgroup": "ffdhe8192" 00:13:47.177 } 00:13:47.177 } 00:13:47.177 ]' 00:13:47.177 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:47.177 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:47.177 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:47.436 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:47.436 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:47.436 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:47.436 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:47.436 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:47.695 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTc2NTlmZTNlOWQ4ZTA4NWZjZDAyOWViMjEwZjk5MGVmNTVjMjU5MDhjYTlhMzYyF3mKKA==: --dhchap-ctrl-secret DHHC-1:01:MTJjZTU3YjQ4NzFlZjRmMmYxZDM1MzZlMjMyZDIxNjTWc7wa: 00:13:47.695 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --hostid 8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -l 0 --dhchap-secret DHHC-1:02:NTc2NTlmZTNlOWQ4ZTA4NWZjZDAyOWViMjEwZjk5MGVmNTVjMjU5MDhjYTlhMzYyF3mKKA==: --dhchap-ctrl-secret DHHC-1:01:MTJjZTU3YjQ4NzFlZjRmMmYxZDM1MzZlMjMyZDIxNjTWc7wa: 00:13:48.263 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:48.263 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:48.263 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:13:48.263 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.263 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.263 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.263 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:48.263 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:48.263 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:48.522 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:13:48.522 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:48.522 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # digest=sha512 00:13:48.523 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:48.523 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:48.523 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:48.523 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --dhchap-key key3 00:13:48.523 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.523 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.523 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.523 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:48.523 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:48.523 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:49.092 00:13:49.092 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:49.092 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:49.092 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:49.351 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:49.351 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:49.351 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.351 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.351 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.351 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:49.351 { 00:13:49.351 "cntlid": 143, 00:13:49.351 "qid": 0, 00:13:49.351 "state": "enabled", 00:13:49.351 "thread": "nvmf_tgt_poll_group_000", 00:13:49.351 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7", 00:13:49.351 "listen_address": { 00:13:49.351 "trtype": "TCP", 00:13:49.351 "adrfam": "IPv4", 00:13:49.351 "traddr": "10.0.0.3", 00:13:49.351 "trsvcid": "4420" 00:13:49.351 }, 00:13:49.351 "peer_address": { 00:13:49.351 "trtype": "TCP", 00:13:49.351 "adrfam": "IPv4", 00:13:49.351 "traddr": "10.0.0.1", 00:13:49.351 "trsvcid": "59856" 00:13:49.351 }, 00:13:49.351 "auth": { 00:13:49.351 "state": "completed", 00:13:49.351 
"digest": "sha512", 00:13:49.351 "dhgroup": "ffdhe8192" 00:13:49.351 } 00:13:49.351 } 00:13:49.351 ]' 00:13:49.351 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:49.351 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:49.351 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:49.351 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:49.351 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:49.611 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:49.611 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:49.611 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:49.870 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2MxYzY4OWY1NTJjNzk1NTlmZmI1OWQ3MzQyNjI3YzU0MGViZTY1NmU0Yzk2MzY3NjZmZTE2MDQzNmY2MjgzMFH2Qo0=: 00:13:49.870 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --hostid 8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -l 0 --dhchap-secret DHHC-1:03:N2MxYzY4OWY1NTJjNzk1NTlmZmI1OWQ3MzQyNjI3YzU0MGViZTY1NmU0Yzk2MzY3NjZmZTE2MDQzNmY2MjgzMFH2Qo0=: 00:13:50.436 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:50.436 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:50.436 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:13:50.436 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.436 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.436 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.436 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:13:50.436 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:13:50.436 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:13:50.436 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:50.436 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:50.436 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups 
null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:50.694 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:13:50.694 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:50.694 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:50.694 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:50.694 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:50.694 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:50.694 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:50.694 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.694 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.694 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.694 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:50.694 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:50.694 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:51.259 00:13:51.259 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:51.259 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:51.259 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:51.518 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:51.518 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:51.518 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.518 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.518 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.518 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:51.518 { 00:13:51.518 "cntlid": 145, 00:13:51.518 "qid": 0, 00:13:51.518 "state": "enabled", 00:13:51.518 "thread": "nvmf_tgt_poll_group_000", 00:13:51.518 
"hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7", 00:13:51.518 "listen_address": { 00:13:51.518 "trtype": "TCP", 00:13:51.518 "adrfam": "IPv4", 00:13:51.518 "traddr": "10.0.0.3", 00:13:51.518 "trsvcid": "4420" 00:13:51.518 }, 00:13:51.518 "peer_address": { 00:13:51.518 "trtype": "TCP", 00:13:51.518 "adrfam": "IPv4", 00:13:51.518 "traddr": "10.0.0.1", 00:13:51.518 "trsvcid": "46912" 00:13:51.518 }, 00:13:51.518 "auth": { 00:13:51.518 "state": "completed", 00:13:51.518 "digest": "sha512", 00:13:51.518 "dhgroup": "ffdhe8192" 00:13:51.518 } 00:13:51.518 } 00:13:51.518 ]' 00:13:51.518 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:51.518 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:51.518 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:51.518 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:51.518 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:51.518 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:51.518 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:51.518 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:51.777 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDQ3NGZhMzBjYzM5YjgwMmVmYzRjNTUxMTQ0ZjEyYzM3ODk0NzRkNDYxMzUzNTE18ttuoA==: --dhchap-ctrl-secret DHHC-1:03:YjJjZjQzMDlmMTk2MWM1MTU1MGZhNjE0NTA1YWJlZjI4NDVjM2U2YmNhNmI0YTFlMGFlNzg2OGM2YTQ3NGE1OWdh3Iw=: 00:13:51.777 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --hostid 8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -l 0 --dhchap-secret DHHC-1:00:NDQ3NGZhMzBjYzM5YjgwMmVmYzRjNTUxMTQ0ZjEyYzM3ODk0NzRkNDYxMzUzNTE18ttuoA==: --dhchap-ctrl-secret DHHC-1:03:YjJjZjQzMDlmMTk2MWM1MTU1MGZhNjE0NTA1YWJlZjI4NDVjM2U2YmNhNmI0YTFlMGFlNzg2OGM2YTQ3NGE1OWdh3Iw=: 00:13:52.711 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:52.711 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:52.711 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:13:52.711 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.711 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.711 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.711 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --dhchap-key key1 00:13:52.711 09:51:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.711 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.711 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.711 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:13:52.711 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:13:52.711 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:13:52.711 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:13:52.711 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:52.711 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:13:52.711 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:52.711 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:13:52.711 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:13:52.711 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:13:53.278 request: 00:13:53.278 { 00:13:53.278 "name": "nvme0", 00:13:53.278 "trtype": "tcp", 00:13:53.278 "traddr": "10.0.0.3", 00:13:53.278 "adrfam": "ipv4", 00:13:53.278 "trsvcid": "4420", 00:13:53.278 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:53.278 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7", 00:13:53.279 "prchk_reftag": false, 00:13:53.279 "prchk_guard": false, 00:13:53.279 "hdgst": false, 00:13:53.279 "ddgst": false, 00:13:53.279 "dhchap_key": "key2", 00:13:53.279 "allow_unrecognized_csi": false, 00:13:53.279 "method": "bdev_nvme_attach_controller", 00:13:53.279 "req_id": 1 00:13:53.279 } 00:13:53.279 Got JSON-RPC error response 00:13:53.279 response: 00:13:53.279 { 00:13:53.279 "code": -5, 00:13:53.279 "message": "Input/output error" 00:13:53.279 } 00:13:53.279 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:13:53.279 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:53.279 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:53.279 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:53.279 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:13:53.279 
09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.279 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.279 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.279 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:53.279 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.279 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.279 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.279 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:53.279 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:13:53.279 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:53.279 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:13:53.279 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:53.279 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:13:53.279 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:53.279 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:53.279 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:53.279 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:53.846 request: 00:13:53.846 { 00:13:53.846 "name": "nvme0", 00:13:53.846 "trtype": "tcp", 00:13:53.846 "traddr": "10.0.0.3", 00:13:53.846 "adrfam": "ipv4", 00:13:53.846 "trsvcid": "4420", 00:13:53.846 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:53.847 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7", 00:13:53.847 "prchk_reftag": false, 00:13:53.847 "prchk_guard": false, 00:13:53.847 "hdgst": false, 00:13:53.847 "ddgst": false, 00:13:53.847 "dhchap_key": "key1", 00:13:53.847 "dhchap_ctrlr_key": "ckey2", 00:13:53.847 "allow_unrecognized_csi": false, 00:13:53.847 "method": "bdev_nvme_attach_controller", 00:13:53.847 "req_id": 1 00:13:53.847 } 00:13:53.847 Got JSON-RPC error response 00:13:53.847 response: 00:13:53.847 { 
00:13:53.847 "code": -5, 00:13:53.847 "message": "Input/output error" 00:13:53.847 } 00:13:53.847 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:13:53.847 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:53.847 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:53.847 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:53.847 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:13:53.847 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.847 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.847 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.847 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --dhchap-key key1 00:13:53.847 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.847 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.847 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.847 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:53.847 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:13:53.847 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:53.847 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:13:53.847 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:53.847 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:13:53.847 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:53.847 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:53.847 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:53.847 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:54.414 
request: 00:13:54.414 { 00:13:54.414 "name": "nvme0", 00:13:54.414 "trtype": "tcp", 00:13:54.414 "traddr": "10.0.0.3", 00:13:54.414 "adrfam": "ipv4", 00:13:54.414 "trsvcid": "4420", 00:13:54.414 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:54.414 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7", 00:13:54.414 "prchk_reftag": false, 00:13:54.414 "prchk_guard": false, 00:13:54.414 "hdgst": false, 00:13:54.414 "ddgst": false, 00:13:54.414 "dhchap_key": "key1", 00:13:54.414 "dhchap_ctrlr_key": "ckey1", 00:13:54.414 "allow_unrecognized_csi": false, 00:13:54.414 "method": "bdev_nvme_attach_controller", 00:13:54.414 "req_id": 1 00:13:54.414 } 00:13:54.414 Got JSON-RPC error response 00:13:54.414 response: 00:13:54.414 { 00:13:54.414 "code": -5, 00:13:54.414 "message": "Input/output error" 00:13:54.414 } 00:13:54.414 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:13:54.414 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:54.414 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:54.414 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:54.414 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:13:54.414 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.414 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.414 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.414 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 67165 00:13:54.414 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 67165 ']' 00:13:54.414 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 67165 00:13:54.414 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:13:54.414 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:54.414 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67165 00:13:54.414 killing process with pid 67165 00:13:54.415 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:54.415 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:54.415 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67165' 00:13:54.415 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 67165 00:13:54.415 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 67165 00:13:54.415 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:13:54.673 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:54.673 09:51:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:54.673 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.673 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=70197 00:13:54.673 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:13:54.673 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 70197 00:13:54.673 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 70197 ']' 00:13:54.673 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:54.673 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:54.673 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:54.673 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:54.673 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.931 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:54.931 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:13:54.931 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:54.931 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:54.931 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.931 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:54.931 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:13:54.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:54.931 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 70197 00:13:54.931 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 70197 ']' 00:13:54.931 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:54.931 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:54.931 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
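At this point the harness has restarted nvmf_tgt with --wait-for-rpc and the nvmf_auth debug flag, and the trace that follows switches from inline DHCHAP secrets to named keyring entries: each generated /tmp/spdk.key-* file is registered with keyring_file_add_key before the host entry is re-added by key name. A minimal hand-run sketch of that target-side sequence, assuming scripts/rpc.py abbreviates /home/vagrant/spdk_repo/spdk/scripts/rpc.py (the same script rpc_cmd and hostrpc invoke) and /var/tmp/spdk.sock is the target socket named in the waitforlisten message above; only key0/key1 are spelled out, key2, key3 and their ctrl keys follow the same pattern:

  scripts/rpc.py -s /var/tmp/spdk.sock keyring_file_add_key key0  /tmp/spdk.key-null.Sle
  scripts/rpc.py -s /var/tmp/spdk.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Lk8
  scripts/rpc.py -s /var/tmp/spdk.sock keyring_file_add_key key1  /tmp/spdk.key-sha256.o1v
  scripts/rpc.py -s /var/tmp/spdk.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.ovk
  scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --dhchap-key key3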
00:13:54.931 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:54.931 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:55.190 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:55.190 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:13:55.190 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:13:55.190 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.190 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:55.190 null0 00:13:55.190 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.190 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:13:55.190 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Sle 00:13:55.190 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.190 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:55.190 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.190 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.Lk8 ]] 00:13:55.190 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Lk8 00:13:55.190 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.190 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:55.190 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.190 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:13:55.190 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.o1v 00:13:55.190 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.190 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:55.190 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.190 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.ovk ]] 00:13:55.190 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.ovk 00:13:55.190 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.190 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:55.190 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.190 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:13:55.190 09:51:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.2DG 00:13:55.190 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.190 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:55.190 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.190 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.Xrv ]] 00:13:55.190 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Xrv 00:13:55.190 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.190 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:55.190 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.190 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:13:55.190 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.1rI 00:13:55.190 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.190 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:55.449 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.449 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:13:55.449 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:13:55.449 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:55.449 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:55.449 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:55.449 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:55.449 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:55.449 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --dhchap-key key3 00:13:55.449 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.449 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:55.449 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.449 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:55.449 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
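The host-side half of the exchange goes through the second RPC socket (/var/tmp/host.sock); bdev_connect is only a thin wrapper that forwards the named keys to bdev_nvme_attach_controller, as the next trace line shows. Written out (again abbreviating the full rpc.py path as scripts/rpc.py), the successful key3 attach amounts to roughly:

  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
      -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 \
      -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3

The NOT bdev_connect cases further down exercise the failure path: when the host offers a key, digest or dhgroup that the subsystem no longer accepts, the same call comes back with the JSON-RPC error ("code": -5, "Input/output error") dumped in the request/response blocks of this log.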
00:13:55.449 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:56.384 nvme0n1 00:13:56.384 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:56.384 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:56.384 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:56.643 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:56.643 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:56.643 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.643 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.643 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.643 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:56.643 { 00:13:56.643 "cntlid": 1, 00:13:56.643 "qid": 0, 00:13:56.643 "state": "enabled", 00:13:56.643 "thread": "nvmf_tgt_poll_group_000", 00:13:56.643 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7", 00:13:56.643 "listen_address": { 00:13:56.643 "trtype": "TCP", 00:13:56.643 "adrfam": "IPv4", 00:13:56.643 "traddr": "10.0.0.3", 00:13:56.643 "trsvcid": "4420" 00:13:56.643 }, 00:13:56.643 "peer_address": { 00:13:56.643 "trtype": "TCP", 00:13:56.643 "adrfam": "IPv4", 00:13:56.643 "traddr": "10.0.0.1", 00:13:56.643 "trsvcid": "46948" 00:13:56.643 }, 00:13:56.643 "auth": { 00:13:56.643 "state": "completed", 00:13:56.643 "digest": "sha512", 00:13:56.643 "dhgroup": "ffdhe8192" 00:13:56.643 } 00:13:56.643 } 00:13:56.643 ]' 00:13:56.643 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:56.643 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:56.643 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:56.643 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:56.643 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:56.643 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:56.643 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:56.643 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:57.212 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:N2MxYzY4OWY1NTJjNzk1NTlmZmI1OWQ3MzQyNjI3YzU0MGViZTY1NmU0Yzk2MzY3NjZmZTE2MDQzNmY2MjgzMFH2Qo0=: 00:13:57.212 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --hostid 8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -l 0 --dhchap-secret DHHC-1:03:N2MxYzY4OWY1NTJjNzk1NTlmZmI1OWQ3MzQyNjI3YzU0MGViZTY1NmU0Yzk2MzY3NjZmZTE2MDQzNmY2MjgzMFH2Qo0=: 00:13:57.781 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:57.781 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:57.781 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:13:57.781 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.781 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.781 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.781 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --dhchap-key key3 00:13:57.781 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.781 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.781 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.781 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:13:57.781 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:13:58.040 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:13:58.040 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:13:58.040 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:13:58.040 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:13:58.040 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:58.040 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:13:58.040 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:58.040 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:58.040 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:58.040 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:58.299 request: 00:13:58.299 { 00:13:58.299 "name": "nvme0", 00:13:58.299 "trtype": "tcp", 00:13:58.299 "traddr": "10.0.0.3", 00:13:58.299 "adrfam": "ipv4", 00:13:58.299 "trsvcid": "4420", 00:13:58.299 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:58.299 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7", 00:13:58.299 "prchk_reftag": false, 00:13:58.299 "prchk_guard": false, 00:13:58.299 "hdgst": false, 00:13:58.299 "ddgst": false, 00:13:58.299 "dhchap_key": "key3", 00:13:58.299 "allow_unrecognized_csi": false, 00:13:58.299 "method": "bdev_nvme_attach_controller", 00:13:58.299 "req_id": 1 00:13:58.299 } 00:13:58.299 Got JSON-RPC error response 00:13:58.299 response: 00:13:58.299 { 00:13:58.299 "code": -5, 00:13:58.299 "message": "Input/output error" 00:13:58.299 } 00:13:58.299 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:13:58.299 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:58.299 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:58.299 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:58.299 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:13:58.299 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:13:58.299 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:13:58.299 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:13:58.559 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:13:58.559 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:13:58.559 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:13:58.559 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:13:58.559 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:58.559 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:13:58.559 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:58.559 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:58.559 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:58.559 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:58.818 request: 00:13:58.818 { 00:13:58.818 "name": "nvme0", 00:13:58.818 "trtype": "tcp", 00:13:58.818 "traddr": "10.0.0.3", 00:13:58.818 "adrfam": "ipv4", 00:13:58.818 "trsvcid": "4420", 00:13:58.818 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:58.818 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7", 00:13:58.818 "prchk_reftag": false, 00:13:58.818 "prchk_guard": false, 00:13:58.818 "hdgst": false, 00:13:58.818 "ddgst": false, 00:13:58.818 "dhchap_key": "key3", 00:13:58.818 "allow_unrecognized_csi": false, 00:13:58.818 "method": "bdev_nvme_attach_controller", 00:13:58.818 "req_id": 1 00:13:58.818 } 00:13:58.818 Got JSON-RPC error response 00:13:58.818 response: 00:13:58.818 { 00:13:58.818 "code": -5, 00:13:58.818 "message": "Input/output error" 00:13:58.818 } 00:13:58.818 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:13:58.818 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:58.819 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:58.819 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:58.819 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:13:58.819 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:13:58.819 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:13:58.819 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:58.819 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:58.819 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:59.080 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:13:59.080 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.080 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.080 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.080 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:13:59.080 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.080 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.080 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.080 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:59.080 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:13:59.080 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:59.080 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:13:59.080 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:59.080 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:13:59.080 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:59.080 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:59.080 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:59.080 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:59.339 request: 00:13:59.339 { 00:13:59.339 "name": "nvme0", 00:13:59.339 "trtype": "tcp", 00:13:59.339 "traddr": "10.0.0.3", 00:13:59.339 "adrfam": "ipv4", 00:13:59.339 "trsvcid": "4420", 00:13:59.339 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:59.339 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7", 00:13:59.339 "prchk_reftag": false, 00:13:59.339 "prchk_guard": false, 00:13:59.339 "hdgst": false, 00:13:59.339 "ddgst": false, 00:13:59.339 "dhchap_key": "key0", 00:13:59.339 "dhchap_ctrlr_key": "key1", 00:13:59.339 "allow_unrecognized_csi": false, 00:13:59.339 "method": "bdev_nvme_attach_controller", 00:13:59.339 "req_id": 1 00:13:59.339 } 00:13:59.339 Got JSON-RPC error response 00:13:59.339 response: 00:13:59.339 { 00:13:59.339 "code": -5, 00:13:59.339 "message": "Input/output error" 00:13:59.339 } 00:13:59.339 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:13:59.339 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:59.339 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:59.339 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:13:59.339 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:13:59.339 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:13:59.339 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:13:59.916 nvme0n1 00:13:59.916 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:13:59.916 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:59.916 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:14:00.175 09:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:00.175 09:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:00.175 09:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:00.435 09:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --dhchap-key key1 00:14:00.435 09:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.435 09:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.435 09:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.435 09:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:14:00.435 09:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:14:00.435 09:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:14:01.371 nvme0n1 00:14:01.371 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:14:01.371 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:14:01.371 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:01.630 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:01.630 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:01.630 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.630 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.630 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.630 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:14:01.630 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:01.630 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:14:01.889 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:01.889 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:NTc2NTlmZTNlOWQ4ZTA4NWZjZDAyOWViMjEwZjk5MGVmNTVjMjU5MDhjYTlhMzYyF3mKKA==: --dhchap-ctrl-secret DHHC-1:03:N2MxYzY4OWY1NTJjNzk1NTlmZmI1OWQ3MzQyNjI3YzU0MGViZTY1NmU0Yzk2MzY3NjZmZTE2MDQzNmY2MjgzMFH2Qo0=: 00:14:01.889 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --hostid 8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -l 0 --dhchap-secret DHHC-1:02:NTc2NTlmZTNlOWQ4ZTA4NWZjZDAyOWViMjEwZjk5MGVmNTVjMjU5MDhjYTlhMzYyF3mKKA==: --dhchap-ctrl-secret DHHC-1:03:N2MxYzY4OWY1NTJjNzk1NTlmZmI1OWQ3MzQyNjI3YzU0MGViZTY1NmU0Yzk2MzY3NjZmZTE2MDQzNmY2MjgzMFH2Qo0=: 00:14:02.458 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:14:02.458 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:14:02.458 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:14:02.458 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:14:02.458 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:14:02.458 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:14:02.458 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:14:02.458 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:02.458 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:02.717 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:14:02.717 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:14:02.717 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:14:02.717 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:14:02.717 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:02.717 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:14:02.717 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:02.717 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:14:02.717 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:14:02.717 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:14:03.285 request: 00:14:03.285 { 00:14:03.285 "name": "nvme0", 00:14:03.285 "trtype": "tcp", 00:14:03.285 "traddr": "10.0.0.3", 00:14:03.285 "adrfam": "ipv4", 00:14:03.285 "trsvcid": "4420", 00:14:03.285 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:03.285 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7", 00:14:03.285 "prchk_reftag": false, 00:14:03.285 "prchk_guard": false, 00:14:03.285 "hdgst": false, 00:14:03.285 "ddgst": false, 00:14:03.285 "dhchap_key": "key1", 00:14:03.285 "allow_unrecognized_csi": false, 00:14:03.285 "method": "bdev_nvme_attach_controller", 00:14:03.285 "req_id": 1 00:14:03.285 } 00:14:03.285 Got JSON-RPC error response 00:14:03.285 response: 00:14:03.285 { 00:14:03.285 "code": -5, 00:14:03.285 "message": "Input/output error" 00:14:03.285 } 00:14:03.543 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:14:03.543 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:03.543 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:03.543 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:03.543 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:03.543 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:03.543 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:04.517 nvme0n1 00:14:04.517 
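The NOT wrapper above asserts the negative case: once the subsystem only accepts key2/key3, an attach that still presents key1 must fail, and the JSON-RPC response carries code -5 (Input/output error), which is how a failed DH-HMAC-CHAP negotiation surfaces to the RPC caller. An illustrative standalone check along the same lines (not part of auth.sh itself):

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock"
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7
# The stale key must be rejected; a zero exit status here would be a test failure.
if $RPC bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
      -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1; then
    echo "ERROR: attach with a revoked key unexpectedly succeeded" >&2
fi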
09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:14:04.517 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:14:04.517 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:04.776 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:04.776 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:04.776 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:05.035 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:14:05.035 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.035 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.035 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.035 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:14:05.035 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:14:05.035 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:14:05.294 nvme0n1 00:14:05.294 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:14:05.294 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:05.294 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:14:05.553 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:05.553 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:05.553 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:05.812 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --dhchap-key key1 --dhchap-ctrlr-key key3 00:14:05.812 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.812 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.812 09:51:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.812 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:ZmU0NDE0MjlmMDc4MGJlYTczZDg5YWY5MjRkZDVhMGToiSZf: '' 2s 00:14:05.812 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:14:05.813 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:14:05.813 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:ZmU0NDE0MjlmMDc4MGJlYTczZDg5YWY5MjRkZDVhMGToiSZf: 00:14:05.813 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:14:05.813 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:14:05.813 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:14:05.813 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:ZmU0NDE0MjlmMDc4MGJlYTczZDg5YWY5MjRkZDVhMGToiSZf: ]] 00:14:05.813 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:ZmU0NDE0MjlmMDc4MGJlYTczZDg5YWY5MjRkZDVhMGToiSZf: 00:14:05.813 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:14:05.813 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:14:05.813 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:14:07.716 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:14:07.716 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:14:07.716 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:14:07.716 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:14:07.716 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:14:07.716 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:14:07.975 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:14:07.975 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --dhchap-key key1 --dhchap-ctrlr-key key2 00:14:07.975 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.975 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.975 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.975 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:NTc2NTlmZTNlOWQ4ZTA4NWZjZDAyOWViMjEwZjk5MGVmNTVjMjU5MDhjYTlhMzYyF3mKKA==: 2s 00:14:07.975 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:14:07.975 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:14:07.975 09:51:32 
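nvme_set_keys re-keys the live kernel-initiator controller: it writes the new secret into the controller's node under /sys/devices/virtual/nvme-fabrics/ctl/nvme0 and then sleeps for the 2s timeout, after which waitforblk polls lsblk until nvme0n1 is visible again. The sysfs attribute name is not shown in this excerpt; dhchap_secret below is an assumption based on the kernel's re-authentication interface:

dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0
echo 'DHHC-1:01:<new host secret>' > "$dev/dhchap_secret"   # attribute name assumed, not visible in the log
sleep 2s
# waitforblk equivalent: poll until the namespace block device is listed again
until lsblk -l -o NAME | grep -q -w nvme0n1; do sleep 1; done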
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:14:07.975 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:NTc2NTlmZTNlOWQ4ZTA4NWZjZDAyOWViMjEwZjk5MGVmNTVjMjU5MDhjYTlhMzYyF3mKKA==: 00:14:07.975 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:14:07.975 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:14:07.975 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:14:07.975 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:NTc2NTlmZTNlOWQ4ZTA4NWZjZDAyOWViMjEwZjk5MGVmNTVjMjU5MDhjYTlhMzYyF3mKKA==: ]] 00:14:07.975 09:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:NTc2NTlmZTNlOWQ4ZTA4NWZjZDAyOWViMjEwZjk5MGVmNTVjMjU5MDhjYTlhMzYyF3mKKA==: 00:14:07.975 09:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:14:07.975 09:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:14:09.881 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:14:09.881 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:14:09.881 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:14:09.881 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:14:09.881 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:14:09.881 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:14:09.881 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:14:09.881 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:09.881 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:09.881 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:09.881 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.881 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.882 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.882 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:14:09.882 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:14:09.882 09:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:14:10.817 nvme0n1 00:14:10.817 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:10.817 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.817 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.817 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.817 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:10.817 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:11.385 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:14:11.385 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:11.385 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:14:11.644 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:11.644 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:14:11.644 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.644 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.644 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.644 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:14:11.644 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:14:11.902 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:14:11.902 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:11.902 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:14:12.160 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:12.160 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:12.160 09:51:37 
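Once a bdev controller is attached, keys can be rotated in place instead of detaching and re-attaching: the target is updated first with nvmf_subsystem_set_keys, then the host re-authenticates the existing controller with bdev_nvme_set_keys; calling both without key arguments (as in the auth.sh@256/@257 steps above) switches authentication off again for this pairing. Condensed, with the names used in this run:

HOST_RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock"
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7
# Rotate to key2/key3 on both sides of the live connection
rpc_cmd nvmf_subsystem_set_keys "$SUBNQN" "$HOSTNQN" --dhchap-key key2 --dhchap-ctrlr-key key3
$HOST_RPC bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
# Dropping the key arguments on both sides disables authentication again
rpc_cmd nvmf_subsystem_set_keys "$SUBNQN" "$HOSTNQN"
$HOST_RPC bdev_nvme_set_keys nvme0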
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.160 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.160 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.160 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:14:12.160 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:14:12.160 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:14:12.160 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:14:12.160 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:12.160 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:14:12.160 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:12.160 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:14:12.160 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:14:12.727 request: 00:14:12.727 { 00:14:12.727 "name": "nvme0", 00:14:12.727 "dhchap_key": "key1", 00:14:12.727 "dhchap_ctrlr_key": "key3", 00:14:12.727 "method": "bdev_nvme_set_keys", 00:14:12.727 "req_id": 1 00:14:12.727 } 00:14:12.727 Got JSON-RPC error response 00:14:12.727 response: 00:14:12.727 { 00:14:12.727 "code": -13, 00:14:12.727 "message": "Permission denied" 00:14:12.727 } 00:14:12.727 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:14:12.727 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:12.727 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:12.727 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:12.727 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:14:12.727 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:12.727 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:14:12.986 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:14:12.986 09:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:14:14.395 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:14:14.395 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:14.395 09:51:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:14:14.395 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:14:14.395 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:14.395 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.395 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.395 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.395 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:14:14.395 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:14:14.395 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:14:15.331 nvme0n1 00:14:15.331 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:15.331 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.331 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.331 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.331 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:14:15.331 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:14:15.331 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:14:15.331 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:14:15.331 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:15.331 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:14:15.331 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:15.331 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 
--dhchap-key key2 --dhchap-ctrlr-key key0 00:14:15.331 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:14:15.898 request: 00:14:15.898 { 00:14:15.898 "name": "nvme0", 00:14:15.898 "dhchap_key": "key2", 00:14:15.898 "dhchap_ctrlr_key": "key0", 00:14:15.898 "method": "bdev_nvme_set_keys", 00:14:15.898 "req_id": 1 00:14:15.898 } 00:14:15.898 Got JSON-RPC error response 00:14:15.898 response: 00:14:15.898 { 00:14:15.898 "code": -13, 00:14:15.898 "message": "Permission denied" 00:14:15.898 } 00:14:15.898 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:14:15.898 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:15.898 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:15.898 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:15.898 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:14:15.898 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:15.898 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:14:16.156 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:14:16.156 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:14:17.131 09:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:14:17.131 09:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:17.131 09:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:14:17.389 09:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:14:17.389 09:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:14:17.389 09:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:14:17.389 09:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 67190 00:14:17.389 09:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 67190 ']' 00:14:17.389 09:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 67190 00:14:17.389 09:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:14:17.389 09:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:17.389 09:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67190 00:14:17.389 09:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:17.389 09:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:17.389 killing process with pid 67190 00:14:17.389 09:51:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67190' 00:14:17.389 09:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 67190 00:14:17.389 09:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 67190 00:14:17.955 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:14:17.955 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:17.955 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:14:17.955 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:17.955 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:14:17.955 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:17.955 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:17.955 rmmod nvme_tcp 00:14:17.955 rmmod nvme_fabrics 00:14:17.955 rmmod nvme_keyring 00:14:17.955 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:17.955 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:14:17.955 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:14:17.955 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 70197 ']' 00:14:17.956 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 70197 00:14:17.956 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 70197 ']' 00:14:17.956 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 70197 00:14:17.956 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:14:17.956 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:17.956 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70197 00:14:17.956 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:17.956 killing process with pid 70197 00:14:17.956 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:17.956 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70197' 00:14:17.956 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 70197 00:14:17.956 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 70197 00:14:18.214 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:18.214 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:18.214 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:18.214 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:14:18.214 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 
00:14:18.214 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:18.214 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:14:18.214 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:18.214 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:18.214 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:18.214 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:18.214 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:18.214 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:18.214 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:18.214 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:18.214 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:18.214 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:18.214 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:18.473 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:18.473 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:18.473 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:18.473 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:18.473 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:18.473 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:18.473 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:18.473 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:18.473 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@300 -- # return 0 00:14:18.473 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.Sle /tmp/spdk.key-sha256.o1v /tmp/spdk.key-sha384.2DG /tmp/spdk.key-sha512.1rI /tmp/spdk.key-sha512.Lk8 /tmp/spdk.key-sha384.ovk /tmp/spdk.key-sha256.Xrv '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:14:18.473 00:14:18.473 real 3m5.650s 00:14:18.473 user 7m23.068s 00:14:18.473 sys 0m29.701s 00:14:18.473 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:18.473 ************************************ 00:14:18.473 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.473 END TEST nvmf_auth_target 
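cleanup/nvmftestfini undoes the whole environment: the nvme-tcp modules are unloaded, the iptables rules tagged SPDK_NVMF are filtered back out, the veth/bridge topology and the target namespace are torn down, and the generated key files are deleted. The network teardown, condensed from the commands above (only the first interface of each family shown):

# Keep every iptables rule except the ones the harness tagged with SPDK_NVMF
iptables-save | grep -v SPDK_NVMF | iptables-restore
# Detach the bridge ports, then delete the bridge, the host-side veths and the namespace links
ip link set nvmf_init_br nomaster && ip link set nvmf_init_br down
ip link set nvmf_tgt_br nomaster && ip link set nvmf_tgt_br down
ip link delete nvmf_br type bridge
ip link delete nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if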
00:14:18.473 ************************************ 00:14:18.473 09:51:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:14:18.473 09:51:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:14:18.473 09:51:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:14:18.473 09:51:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:18.473 09:51:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:18.473 ************************************ 00:14:18.473 START TEST nvmf_bdevio_no_huge 00:14:18.473 ************************************ 00:14:18.474 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:14:18.734 * Looking for test storage... 00:14:18.734 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:18.734 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:18.734 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lcov --version 00:14:18.734 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:18.734 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:18.734 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:18.734 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:18.734 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:18.734 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:14:18.734 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:14:18.734 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:14:18.734 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:14:18.734 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:14:18.734 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:14:18.734 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:14:18.734 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:18.734 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:14:18.734 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:14:18.734 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:18.734 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:18.734 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:14:18.734 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:14:18.734 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:18.734 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:14:18.734 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:14:18.734 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:14:18.734 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:14:18.734 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:18.734 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:14:18.734 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:14:18.734 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:18.734 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:18.734 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:14:18.734 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:18.734 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:18.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:18.734 --rc genhtml_branch_coverage=1 00:14:18.734 --rc genhtml_function_coverage=1 00:14:18.734 --rc genhtml_legend=1 00:14:18.734 --rc geninfo_all_blocks=1 00:14:18.734 --rc geninfo_unexecuted_blocks=1 00:14:18.734 00:14:18.734 ' 00:14:18.734 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:18.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:18.734 --rc genhtml_branch_coverage=1 00:14:18.734 --rc genhtml_function_coverage=1 00:14:18.734 --rc genhtml_legend=1 00:14:18.734 --rc geninfo_all_blocks=1 00:14:18.734 --rc geninfo_unexecuted_blocks=1 00:14:18.734 00:14:18.734 ' 00:14:18.734 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:18.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:18.734 --rc genhtml_branch_coverage=1 00:14:18.734 --rc genhtml_function_coverage=1 00:14:18.734 --rc genhtml_legend=1 00:14:18.734 --rc geninfo_all_blocks=1 00:14:18.734 --rc geninfo_unexecuted_blocks=1 00:14:18.734 00:14:18.734 ' 00:14:18.734 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:18.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:18.734 --rc genhtml_branch_coverage=1 00:14:18.734 --rc genhtml_function_coverage=1 00:14:18.734 --rc genhtml_legend=1 00:14:18.734 --rc geninfo_all_blocks=1 00:14:18.734 --rc geninfo_unexecuted_blocks=1 00:14:18.734 00:14:18.734 ' 00:14:18.734 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:18.734 
09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:14:18.734 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:18.734 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:18.734 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:18.734 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:18.734 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:18.734 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:18.734 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:18.735 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:18.735 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:18.735 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:18.735 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:14:18.735 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:14:18.735 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:18.735 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:18.735 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:18.735 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:18.735 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:18.735 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:14:18.735 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:18.735 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:18.735 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:18.735 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:18.735 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:18.735 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:18.735 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:14:18.735 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:18.735 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:14:18.735 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:18.735 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:18.735 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:18.735 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:18.735 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:18.735 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:18.735 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:18.735 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:18.735 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:18.735 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:18.735 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:18.735 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:18.735 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:14:18.735 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:18.735 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:18.735 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:18.735 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:18.735 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:18.735 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:18.735 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:18.735 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:18.735 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:18.735 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:18.735 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:18.735 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:18.735 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:18.735 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:18.735 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:18.735 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:18.735 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:18.735 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:18.735 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:18.735 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:18.735 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:18.735 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:18.735 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:18.735 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:18.735 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:18.735 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:18.735 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:18.735 
09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:18.735 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:18.735 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:18.735 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:18.735 Cannot find device "nvmf_init_br" 00:14:18.735 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:14:18.735 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:18.735 Cannot find device "nvmf_init_br2" 00:14:18.735 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:14:18.735 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:18.735 Cannot find device "nvmf_tgt_br" 00:14:18.735 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # true 00:14:18.735 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:18.735 Cannot find device "nvmf_tgt_br2" 00:14:18.735 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # true 00:14:18.735 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:18.735 Cannot find device "nvmf_init_br" 00:14:18.735 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # true 00:14:18.735 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:18.994 Cannot find device "nvmf_init_br2" 00:14:18.994 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # true 00:14:18.994 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:18.994 Cannot find device "nvmf_tgt_br" 00:14:18.994 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # true 00:14:18.994 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:18.994 Cannot find device "nvmf_tgt_br2" 00:14:18.994 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # true 00:14:18.994 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:18.994 Cannot find device "nvmf_br" 00:14:18.994 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # true 00:14:18.994 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:18.994 Cannot find device "nvmf_init_if" 00:14:18.994 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # true 00:14:18.994 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:18.994 Cannot find device "nvmf_init_if2" 00:14:18.994 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # true 00:14:18.994 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete 
nvmf_tgt_if 00:14:18.994 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:18.994 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # true 00:14:18.994 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:18.994 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:18.994 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # true 00:14:18.994 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:18.994 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:18.994 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:18.994 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:18.994 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:18.994 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:18.994 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:18.994 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:18.994 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:18.994 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:18.994 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:18.994 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:18.994 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:18.994 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:18.994 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:18.994 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:18.994 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:18.994 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:18.994 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:18.994 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:18.994 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:18.994 09:51:44 
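nvmf_veth_init builds a self-contained test network: a namespace for the target, veth pairs for the initiator and target sides, the 10.0.0.1-10.0.0.4/24 addresses on the four endpoints, and, immediately after this point, a bridge that joins the peer ends. A condensed sketch showing one initiator/target pair; the log above creates two of each:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
# The *_br peers are then enslaved to the nvmf_br bridge, ACCEPT rules are added for
# TCP port 4420, and connectivity is verified with the pings that follow.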
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:18.994 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:18.994 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:18.994 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:19.253 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:19.253 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:19.253 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:19.253 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:19.253 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:19.254 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:19.254 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:19.254 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:19.254 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:19.254 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:14:19.254 00:14:19.254 --- 10.0.0.3 ping statistics --- 00:14:19.254 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:19.254 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:14:19.254 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:19.254 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:19.254 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.071 ms 00:14:19.254 00:14:19.254 --- 10.0.0.4 ping statistics --- 00:14:19.254 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:19.254 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:14:19.254 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:19.254 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:19.254 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:14:19.254 00:14:19.254 --- 10.0.0.1 ping statistics --- 00:14:19.254 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:19.254 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:14:19.254 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:19.254 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:19.254 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:14:19.254 00:14:19.254 --- 10.0.0.2 ping statistics --- 00:14:19.254 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:19.254 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:14:19.254 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:19.254 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@461 -- # return 0 00:14:19.254 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:19.254 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:19.254 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:19.254 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:19.254 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:19.254 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:19.254 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:19.254 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:14:19.254 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:19.254 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:19.254 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:19.254 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=70831 00:14:19.254 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:14:19.254 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 70831 00:14:19.254 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 70831 ']' 00:14:19.254 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:19.254 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:19.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:19.254 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:19.254 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:19.254 09:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:19.254 [2024-12-06 09:51:44.392820] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
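At this point connectivity has been verified in both directions and nvmfappstart launches the target inside the namespace with hugepages disabled (--no-huge -s 1024) on core mask 0x78 (cores 3-6, matching the reactor notices that follow), then waitforlisten blocks until the RPC socket answers. A minimal stand-alone equivalent of that launch-and-wait step (the polling loop approximates waitforlisten rather than reproducing it; rpc_get_methods is a standard SPDK RPC):

SPDK=/home/vagrant/spdk_repo/spdk

# Start the target inside the namespace, without hugepages, on cores 3-6 (mask 0x78).
ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" \
    -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
nvmfpid=$!

# Rough equivalent of waitforlisten: poll the default RPC socket until it responds.
for _ in $(seq 1 100); do
    if "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; then
        break
    fi
    kill -0 "$nvmfpid" || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
    sleep 0.1
done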
00:14:19.254 [2024-12-06 09:51:44.392896] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:14:19.513 [2024-12-06 09:51:44.551591] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:19.513 [2024-12-06 09:51:44.633548] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:19.513 [2024-12-06 09:51:44.633625] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:19.513 [2024-12-06 09:51:44.633639] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:19.513 [2024-12-06 09:51:44.633650] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:19.513 [2024-12-06 09:51:44.633659] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:19.513 [2024-12-06 09:51:44.634681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:14:19.513 [2024-12-06 09:51:44.634956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:19.513 [2024-12-06 09:51:44.634829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:14:19.513 [2024-12-06 09:51:44.634943] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:14:19.513 [2024-12-06 09:51:44.641142] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:20.449 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:20.449 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:14:20.449 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:20.449 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:20.449 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:20.449 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:20.449 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:20.449 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.449 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:20.449 [2024-12-06 09:51:45.399238] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:20.449 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.449 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:20.449 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.449 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:20.449 Malloc0 00:14:20.449 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.449 09:51:45 
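With the target listening, the test provisions it over JSON-RPC: a TCP transport with the options from NVMF_TRANSPORT_OPTS, a 64 MiB malloc bdev, and, in the lines that follow, a subsystem exposing that bdev on 10.0.0.3:4420. The rpc_cmd calls in the log are a wrapper around scripts/rpc.py; the same sequence as direct invocations:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Transport options exactly as traced above (-u 8192 sets the in-capsule data size).
$rpc nvmf_create_transport -t tcp -o -u 8192

# 64 MiB RAM-backed bdev with 512-byte blocks to serve as the namespace.
$rpc bdev_malloc_create 64 512 -b Malloc0

# Subsystem, namespace and TCP listener, as in the bdevio.sh steps that follow.
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420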
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:20.449 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.449 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:20.449 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.449 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:20.449 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.449 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:20.449 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.449 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:20.449 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.449 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:20.449 [2024-12-06 09:51:45.443478] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:20.449 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.449 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:14:20.449 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:14:20.449 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:14:20.449 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:14:20.449 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:14:20.449 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:14:20.449 { 00:14:20.449 "params": { 00:14:20.449 "name": "Nvme$subsystem", 00:14:20.449 "trtype": "$TEST_TRANSPORT", 00:14:20.449 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:20.449 "adrfam": "ipv4", 00:14:20.449 "trsvcid": "$NVMF_PORT", 00:14:20.449 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:20.449 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:20.449 "hdgst": ${hdgst:-false}, 00:14:20.449 "ddgst": ${ddgst:-false} 00:14:20.449 }, 00:14:20.449 "method": "bdev_nvme_attach_controller" 00:14:20.449 } 00:14:20.449 EOF 00:14:20.449 )") 00:14:20.449 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:14:20.449 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
00:14:20.449 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:14:20.449 09:51:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:14:20.449 "params": { 00:14:20.449 "name": "Nvme1", 00:14:20.449 "trtype": "tcp", 00:14:20.449 "traddr": "10.0.0.3", 00:14:20.449 "adrfam": "ipv4", 00:14:20.449 "trsvcid": "4420", 00:14:20.449 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:20.449 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:20.449 "hdgst": false, 00:14:20.449 "ddgst": false 00:14:20.449 }, 00:14:20.449 "method": "bdev_nvme_attach_controller" 00:14:20.449 }' 00:14:20.449 [2024-12-06 09:51:45.504060] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:14:20.449 [2024-12-06 09:51:45.504159] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid70868 ] 00:14:20.449 [2024-12-06 09:51:45.667347] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:20.708 [2024-12-06 09:51:45.749838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:20.708 [2024-12-06 09:51:45.749968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:20.708 [2024-12-06 09:51:45.749975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:20.708 [2024-12-06 09:51:45.764285] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:20.968 I/O targets: 00:14:20.968 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:14:20.968 00:14:20.968 00:14:20.968 CUnit - A unit testing framework for C - Version 2.1-3 00:14:20.968 http://cunit.sourceforge.net/ 00:14:20.968 00:14:20.968 00:14:20.968 Suite: bdevio tests on: Nvme1n1 00:14:20.968 Test: blockdev write read block ...passed 00:14:20.968 Test: blockdev write zeroes read block ...passed 00:14:20.968 Test: blockdev write zeroes read no split ...passed 00:14:20.968 Test: blockdev write zeroes read split ...passed 00:14:20.968 Test: blockdev write zeroes read split partial ...passed 00:14:20.968 Test: blockdev reset ...[2024-12-06 09:51:46.034548] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:14:20.968 [2024-12-06 09:51:46.034696] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdfae90 (9): Bad file descriptor 00:14:20.968 [2024-12-06 09:51:46.053437] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:14:20.968 passed 00:14:20.968 Test: blockdev write read 8 blocks ...passed 00:14:20.968 Test: blockdev write read size > 128k ...passed 00:14:20.968 Test: blockdev write read invalid size ...passed 00:14:20.968 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:20.968 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:20.968 Test: blockdev write read max offset ...passed 00:14:20.968 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:20.968 Test: blockdev writev readv 8 blocks ...passed 00:14:20.968 Test: blockdev writev readv 30 x 1block ...passed 00:14:20.968 Test: blockdev writev readv block ...passed 00:14:20.968 Test: blockdev writev readv size > 128k ...passed 00:14:20.968 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:20.968 Test: blockdev comparev and writev ...[2024-12-06 09:51:46.063044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:20.968 [2024-12-06 09:51:46.063085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:20.968 [2024-12-06 09:51:46.063106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:20.968 [2024-12-06 09:51:46.063118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:14:20.968 [2024-12-06 09:51:46.063880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:20.968 [2024-12-06 09:51:46.063908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:14:20.968 [2024-12-06 09:51:46.063929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:20.968 [2024-12-06 09:51:46.063938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:14:20.968 [2024-12-06 09:51:46.064321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:20.968 [2024-12-06 09:51:46.064348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:14:20.968 [2024-12-06 09:51:46.064365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:20.968 [2024-12-06 09:51:46.064376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:14:20.968 [2024-12-06 09:51:46.064970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:20.968 [2024-12-06 09:51:46.064997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:14:20.968 [2024-12-06 09:51:46.065016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:20.968 [2024-12-06 09:51:46.065027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:14:20.968 passed 00:14:20.968 Test: blockdev nvme passthru rw ...passed 00:14:20.968 Test: blockdev nvme passthru vendor specific ...[2024-12-06 09:51:46.066017] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:20.968 [2024-12-06 09:51:46.066043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:14:20.968 [2024-12-06 09:51:46.066152] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:20.968 [2024-12-06 09:51:46.066169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:14:20.968 [2024-12-06 09:51:46.066283] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:20.968 [2024-12-06 09:51:46.066300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:14:20.968 [2024-12-06 09:51:46.066393] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:20.968 [2024-12-06 09:51:46.066436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:14:20.968 passed 00:14:20.968 Test: blockdev nvme admin passthru ...passed 00:14:20.968 Test: blockdev copy ...passed 00:14:20.968 00:14:20.968 Run Summary: Type Total Ran Passed Failed Inactive 00:14:20.968 suites 1 1 n/a 0 0 00:14:20.968 tests 23 23 23 0 0 00:14:20.968 asserts 152 152 152 0 n/a 00:14:20.968 00:14:20.968 Elapsed time = 0.227 seconds 00:14:21.228 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:21.229 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.229 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:21.229 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.229 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:14:21.229 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:14:21.229 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:21.229 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:14:21.487 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:21.487 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:14:21.487 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:21.487 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:21.487 rmmod nvme_tcp 00:14:21.487 rmmod nvme_fabrics 00:14:21.487 rmmod nvme_keyring 00:14:21.488 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:21.488 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
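The run summary above (23 of 23 bdevio tests passed, the compare-and-write fused failures being expected negative cases) was produced by bdevio attaching to the target as an NVMe-oF/TCP initiator. Earlier in the log, gen_nvmf_target_json fed it the attach-controller parameters on /dev/fd/62; an equivalent, self-contained invocation with the same parameters written to a file (the surrounding "subsystems"/"bdev" envelope is the standard SPDK JSON config layout; any extra entries the helper may add are omitted here):

cat > /tmp/bdevio_nvme.json << 'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.3",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio \
    --json /tmp/bdevio_nvme.json --no-huge -s 1024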
nvmf/common.sh@128 -- # set -e 00:14:21.488 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:14:21.488 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 70831 ']' 00:14:21.488 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 70831 00:14:21.488 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 70831 ']' 00:14:21.488 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 70831 00:14:21.488 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:14:21.488 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:21.488 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70831 00:14:21.488 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:14:21.488 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:14:21.488 killing process with pid 70831 00:14:21.488 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70831' 00:14:21.488 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 70831 00:14:21.488 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 70831 00:14:21.747 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:21.747 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:21.747 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:21.747 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:14:21.747 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:14:21.747 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:21.747 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:14:21.747 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:21.747 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:21.747 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:21.747 09:51:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:21.747 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:22.006 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:22.006 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:22.006 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:22.006 09:51:47 
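The iptr step above undoes only the firewall rules this test installed. Each rule was added by the ipts wrapper with an -m comment tag beginning with SPDK_NVMF (see the rules inserted earlier in the log), so teardown can rewrite the ruleset with those entries filtered out instead of tracking rule numbers. The idiom in compact form:

# Install: tag the rule so it can be identified later (this is what ipts does).
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'

# Teardown: dump the ruleset, drop every SPDK_NVMF-tagged line, and reload the rest.
iptables-save | grep -v SPDK_NVMF | iptables-restore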
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:22.006 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:22.006 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:22.006 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:22.006 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:22.006 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:22.006 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:22.006 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:22.006 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:22.006 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:22.006 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:22.006 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@300 -- # return 0 00:14:22.006 00:14:22.006 real 0m3.501s 00:14:22.006 user 0m10.751s 00:14:22.006 sys 0m1.395s 00:14:22.006 ************************************ 00:14:22.006 END TEST nvmf_bdevio_no_huge 00:14:22.006 ************************************ 00:14:22.006 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:22.006 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:22.006 09:51:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:14:22.007 09:51:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:22.007 09:51:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:22.007 09:51:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:22.279 ************************************ 00:14:22.279 START TEST nvmf_tls 00:14:22.279 ************************************ 00:14:22.279 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:14:22.279 * Looking for test storage... 
00:14:22.279 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:22.279 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:22.279 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lcov --version 00:14:22.279 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:22.279 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:22.279 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:22.279 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:22.279 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:22.279 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:14:22.279 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:14:22.279 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:14:22.279 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:14:22.279 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:14:22.279 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:14:22.279 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:14:22.279 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:22.279 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:14:22.279 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:14:22.279 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:22.279 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:22.279 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:14:22.279 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:14:22.279 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:22.279 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:14:22.279 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:14:22.279 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:14:22.280 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:14:22.280 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:22.280 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:14:22.280 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:14:22.280 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:22.280 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:22.280 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:14:22.280 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:22.280 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:22.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:22.280 --rc genhtml_branch_coverage=1 00:14:22.280 --rc genhtml_function_coverage=1 00:14:22.280 --rc genhtml_legend=1 00:14:22.280 --rc geninfo_all_blocks=1 00:14:22.280 --rc geninfo_unexecuted_blocks=1 00:14:22.280 00:14:22.280 ' 00:14:22.280 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:22.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:22.280 --rc genhtml_branch_coverage=1 00:14:22.280 --rc genhtml_function_coverage=1 00:14:22.280 --rc genhtml_legend=1 00:14:22.280 --rc geninfo_all_blocks=1 00:14:22.280 --rc geninfo_unexecuted_blocks=1 00:14:22.280 00:14:22.280 ' 00:14:22.280 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:22.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:22.280 --rc genhtml_branch_coverage=1 00:14:22.280 --rc genhtml_function_coverage=1 00:14:22.280 --rc genhtml_legend=1 00:14:22.280 --rc geninfo_all_blocks=1 00:14:22.280 --rc geninfo_unexecuted_blocks=1 00:14:22.280 00:14:22.280 ' 00:14:22.280 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:22.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:22.280 --rc genhtml_branch_coverage=1 00:14:22.280 --rc genhtml_function_coverage=1 00:14:22.280 --rc genhtml_legend=1 00:14:22.280 --rc geninfo_all_blocks=1 00:14:22.280 --rc geninfo_unexecuted_blocks=1 00:14:22.280 00:14:22.280 ' 00:14:22.280 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:22.280 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:14:22.280 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:22.280 09:51:47 
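The lt/cmp_versions trace above is scripts/common.sh checking whether the installed lcov predates 2.0 (here 1.15 < 2, so the pre-2.0 coverage flags are exported next). A simplified sketch of the same dotted-version comparison; the real helper also splits on '-' and ':' and supports other comparison operators:

version_lt() {  # usage: version_lt 1.15 2  -> returns 0 (true) when $1 < $2
    local IFS=.
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for ((i = 0; i < n; i++)); do
        local x=${a[i]:-0} y=${b[i]:-0}   # missing components count as 0
        if ((x < y)); then return 0; fi
        if ((x > y)); then return 1; fi
    done
    return 1  # equal
}

if version_lt "$(lcov --version | awk '{print $NF}')" 2; then
    export LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
fi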
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:22.280 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:22.280 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:22.280 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:22.280 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:22.280 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:22.280 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:22.280 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:22.280 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:22.280 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:14:22.280 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:14:22.280 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:22.280 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:22.280 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:22.280 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:22.280 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:22.280 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:14:22.280 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:22.280 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:22.280 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:22.280 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.280 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.280 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.280 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:14:22.280 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.280 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:14:22.280 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:22.280 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:22.280 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:22.280 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:22.280 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:22.280 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:22.280 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:22.280 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:22.280 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:22.280 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:22.280 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:22.280 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:14:22.280 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:22.280 
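The "[: : integer expression expected" message above is a benign scripting artifact, not a test failure: build_nvmf_app_args evaluates '[' '' -eq 1 ']', a numeric test against a variable that is empty in this environment, and test(1) rejects the empty operand, so the check simply comes out false and the run continues. A guarded form of that kind of test (generic sketch; SOME_FLAG and enable_feature are placeholders, not names from the script):

# Emits "integer expression expected" and evaluates false when SOME_FLAG is empty:
if [ "$SOME_FLAG" -eq 1 ]; then enable_feature; fi

# Quieter variants: default the value, or compare as a string.
if [ "${SOME_FLAG:-0}" -eq 1 ]; then enable_feature; fi
if [[ "$SOME_FLAG" == 1 ]]; then enable_feature; fi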
09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:22.280 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:22.280 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:22.280 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:22.280 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:22.280 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:22.280 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:22.280 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:22.280 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:22.280 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:22.280 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:22.280 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:22.280 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:22.280 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:22.280 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:22.280 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:22.280 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:22.280 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:22.280 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:22.280 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:22.280 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:22.280 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:22.280 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:22.280 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:22.280 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:22.280 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:22.280 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:22.280 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:22.281 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:22.281 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:22.281 Cannot find device "nvmf_init_br" 00:14:22.281 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@162 -- # true 00:14:22.281 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:22.281 Cannot find device "nvmf_init_br2" 00:14:22.281 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 00:14:22.281 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:22.281 Cannot find device "nvmf_tgt_br" 00:14:22.281 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # true 00:14:22.281 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:22.281 Cannot find device "nvmf_tgt_br2" 00:14:22.281 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # true 00:14:22.281 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:22.554 Cannot find device "nvmf_init_br" 00:14:22.554 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # true 00:14:22.554 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:22.554 Cannot find device "nvmf_init_br2" 00:14:22.554 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # true 00:14:22.554 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:22.554 Cannot find device "nvmf_tgt_br" 00:14:22.554 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # true 00:14:22.554 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:22.554 Cannot find device "nvmf_tgt_br2" 00:14:22.554 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # true 00:14:22.554 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:22.554 Cannot find device "nvmf_br" 00:14:22.554 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # true 00:14:22.554 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:22.554 Cannot find device "nvmf_init_if" 00:14:22.554 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # true 00:14:22.554 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:22.554 Cannot find device "nvmf_init_if2" 00:14:22.554 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # true 00:14:22.554 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:22.554 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:22.554 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # true 00:14:22.554 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:22.554 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:22.554 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # true 00:14:22.554 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:22.554 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:22.555 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@181 -- # ip link 
add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:22.555 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:22.555 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:22.555 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:22.555 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:22.555 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:22.555 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:22.555 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:22.555 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:22.555 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:22.555 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:22.555 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:22.555 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:22.555 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:22.555 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:22.555 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:22.555 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:22.555 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:22.555 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:22.555 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:22.555 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:22.555 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:22.555 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:22.814 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:22.814 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:22.814 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:22.814 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:22.814 09:51:47 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:22.814 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:22.814 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:22.814 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:22.814 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:22.814 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.124 ms 00:14:22.814 00:14:22.814 --- 10.0.0.3 ping statistics --- 00:14:22.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:22.814 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:14:22.814 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:22.814 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:22.814 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.123 ms 00:14:22.814 00:14:22.814 --- 10.0.0.4 ping statistics --- 00:14:22.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:22.814 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:14:22.814 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:22.814 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:22.814 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.052 ms 00:14:22.814 00:14:22.814 --- 10.0.0.1 ping statistics --- 00:14:22.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:22.814 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:14:22.814 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:22.814 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:22.814 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.090 ms 00:14:22.814 00:14:22.814 --- 10.0.0.2 ping statistics --- 00:14:22.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:22.814 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:14:22.815 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:22.815 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@461 -- # return 0 00:14:22.815 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:22.815 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:22.815 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:22.815 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:22.815 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:22.815 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:22.815 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:22.815 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:14:22.815 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:22.815 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:22.815 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:22.815 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71102 00:14:22.815 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71102 00:14:22.815 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71102 ']' 00:14:22.815 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:14:22.815 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:22.815 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:22.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:22.815 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:22.815 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:22.815 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:22.815 [2024-12-06 09:51:47.978051] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
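For the TLS suite the target is started with --wait-for-rpc, which pauses the event framework before subsystem initialization so socket-layer options can still be changed over RPC; the following lines then switch the default socket implementation to ssl and probe --tls-version. The pattern in direct rpc.py form (framework_start_init is the standard RPC for resuming initialization after this kind of early configuration; the exact point where tls.sh issues it lies outside this excerpt):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Only possible while the target is still paused by --wait-for-rpc.
$rpc sock_set_default_impl -i ssl
$rpc sock_impl_set_options -i ssl --tls-version 13
$rpc sock_impl_get_options -i ssl | jq -r .tls_version   # expect 13

# Resume initialization once the socket layer is configured.
$rpc framework_start_init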
00:14:22.815 [2024-12-06 09:51:47.978800] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:23.075 [2024-12-06 09:51:48.134023] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:23.075 [2024-12-06 09:51:48.210948] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:23.075 [2024-12-06 09:51:48.211068] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:23.075 [2024-12-06 09:51:48.211089] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:23.075 [2024-12-06 09:51:48.211107] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:23.075 [2024-12-06 09:51:48.211120] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:23.075 [2024-12-06 09:51:48.211720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:23.075 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:23.075 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:23.075 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:23.075 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:23.075 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:23.075 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:23.075 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:14:23.075 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:14:23.334 true 00:14:23.334 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:23.334 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:14:23.592 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:14:23.593 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:14:23.593 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:14:23.852 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:23.852 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:14:24.111 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:14:24.111 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:14:24.111 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:14:24.370 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i 
ssl 00:14:24.370 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:14:24.629 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:14:24.629 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:14:24.629 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:24.629 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:14:24.888 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:14:24.888 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:14:24.888 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:14:25.146 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:25.146 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:14:25.407 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:14:25.407 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:14:25.407 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:14:25.666 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:14:25.666 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:25.926 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:14:25.926 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:14:25.926 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:14:25.926 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:14:25.926 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:14:25.926 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:14:25.926 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:14:25.926 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:14:25.926 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:14:25.926 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:14:25.926 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:14:25.926 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:14:25.926 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:14:25.926 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:14:25.926 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:14:25.926 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:14:25.926 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:14:25.926 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:14:25.926 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:14:25.926 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.mj9UsBQ8gX 00:14:25.926 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:14:25.926 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.TJwBmwkE8z 00:14:25.926 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:14:25.926 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:14:25.926 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.mj9UsBQ8gX 00:14:25.926 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.TJwBmwkE8z 00:14:26.185 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:14:26.185 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:14:26.761 [2024-12-06 09:51:51.731652] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:26.761 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.mj9UsBQ8gX 00:14:26.761 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.mj9UsBQ8gX 00:14:26.761 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:26.761 [2024-12-06 09:51:52.007205] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:26.761 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:27.020 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:14:27.278 [2024-12-06 09:51:52.539275] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:27.278 [2024-12-06 09:51:52.539610] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:27.537 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:27.537 malloc0 00:14:27.537 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:27.796 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.mj9UsBQ8gX 00:14:28.056 09:51:53 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:14:28.315 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.mj9UsBQ8gX 00:14:40.573 Initializing NVMe Controllers 00:14:40.573 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:14:40.573 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:40.573 Initialization complete. Launching workers. 00:14:40.573 ======================================================== 00:14:40.573 Latency(us) 00:14:40.573 Device Information : IOPS MiB/s Average min max 00:14:40.573 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9828.68 38.39 6513.30 1470.50 11234.70 00:14:40.573 ======================================================== 00:14:40.573 Total : 9828.68 38.39 6513.30 1470.50 11234.70 00:14:40.573 00:14:40.573 09:52:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.mj9UsBQ8gX 00:14:40.573 09:52:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:40.573 09:52:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:40.573 09:52:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:40.573 09:52:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.mj9UsBQ8gX 00:14:40.573 09:52:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:40.573 09:52:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:40.573 09:52:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71327 00:14:40.573 09:52:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:40.573 09:52:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71327 /var/tmp/bdevperf.sock 00:14:40.573 09:52:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71327 ']' 00:14:40.573 09:52:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:40.573 09:52:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:40.573 09:52:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:40.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
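Note: both interchange-format keys generated above are written to mktemp files (/tmp/tmp.mj9UsBQ8gX and /tmp/tmp.TJwBmwkE8z) and restricted to mode 0600, but only the first is registered with the target. A condensed sketch of that target-side provisioning, reusing the exact calls from the trace:

  # register the PSK file in the target's keyring and authorize host1 to use it against cnode1
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.mj9UsBQ8gX
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
      nqn.2016-06.io.spdk:host1 --psk key0

The spdk_nvme_perf run summarized above consumes the same file directly through '-S ssl ... --psk-path /tmp/tmp.mj9UsBQ8gX' rather than going through a keyring.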
00:14:40.573 09:52:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:40.573 09:52:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:40.573 [2024-12-06 09:52:03.740873] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:14:40.573 [2024-12-06 09:52:03.741937] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71327 ] 00:14:40.573 [2024-12-06 09:52:03.895687] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:40.573 [2024-12-06 09:52:03.963891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:40.573 [2024-12-06 09:52:04.021238] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:40.573 09:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:40.573 09:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:40.573 09:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.mj9UsBQ8gX 00:14:40.573 09:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:40.573 [2024-12-06 09:52:05.248772] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:40.573 TLSTESTn1 00:14:40.573 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:40.574 Running I/O for 10 seconds... 
00:14:42.447 4402.00 IOPS, 17.20 MiB/s [2024-12-06T09:52:08.657Z] 4496.50 IOPS, 17.56 MiB/s [2024-12-06T09:52:09.594Z] 4457.00 IOPS, 17.41 MiB/s [2024-12-06T09:52:10.532Z] 4482.75 IOPS, 17.51 MiB/s [2024-12-06T09:52:11.477Z] 4493.40 IOPS, 17.55 MiB/s [2024-12-06T09:52:12.856Z] 4501.67 IOPS, 17.58 MiB/s [2024-12-06T09:52:13.795Z] 4511.86 IOPS, 17.62 MiB/s [2024-12-06T09:52:14.749Z] 4480.75 IOPS, 17.50 MiB/s [2024-12-06T09:52:15.684Z] 4423.44 IOPS, 17.28 MiB/s [2024-12-06T09:52:15.684Z] 4432.10 IOPS, 17.31 MiB/s 00:14:50.412 Latency(us) 00:14:50.412 [2024-12-06T09:52:15.684Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:50.412 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:50.412 Verification LBA range: start 0x0 length 0x2000 00:14:50.412 TLSTESTn1 : 10.02 4437.27 17.33 0.00 0.00 28793.61 6613.18 26810.18 00:14:50.412 [2024-12-06T09:52:15.684Z] =================================================================================================================== 00:14:50.412 [2024-12-06T09:52:15.684Z] Total : 4437.27 17.33 0.00 0.00 28793.61 6613.18 26810.18 00:14:50.412 { 00:14:50.412 "results": [ 00:14:50.412 { 00:14:50.412 "job": "TLSTESTn1", 00:14:50.412 "core_mask": "0x4", 00:14:50.412 "workload": "verify", 00:14:50.412 "status": "finished", 00:14:50.412 "verify_range": { 00:14:50.412 "start": 0, 00:14:50.412 "length": 8192 00:14:50.412 }, 00:14:50.412 "queue_depth": 128, 00:14:50.412 "io_size": 4096, 00:14:50.412 "runtime": 10.016975, 00:14:50.412 "iops": 4437.26773801472, 00:14:50.412 "mibps": 17.33307710162, 00:14:50.412 "io_failed": 0, 00:14:50.412 "io_timeout": 0, 00:14:50.412 "avg_latency_us": 28793.613990444403, 00:14:50.413 "min_latency_us": 6613.178181818182, 00:14:50.413 "max_latency_us": 26810.18181818182 00:14:50.413 } 00:14:50.413 ], 00:14:50.413 "core_count": 1 00:14:50.413 } 00:14:50.413 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:50.413 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 71327 00:14:50.413 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71327 ']' 00:14:50.413 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71327 00:14:50.413 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:50.413 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:50.413 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71327 00:14:50.413 killing process with pid 71327 00:14:50.413 Received shutdown signal, test time was about 10.000000 seconds 00:14:50.413 00:14:50.413 Latency(us) 00:14:50.413 [2024-12-06T09:52:15.685Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:50.413 [2024-12-06T09:52:15.685Z] =================================================================================================================== 00:14:50.413 [2024-12-06T09:52:15.685Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:50.413 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:14:50.413 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:14:50.413 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 71327' 00:14:50.413 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71327 00:14:50.413 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71327 00:14:50.671 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.TJwBmwkE8z 00:14:50.671 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:14:50.671 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.TJwBmwkE8z 00:14:50.671 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:14:50.671 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:50.671 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:14:50.671 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:50.671 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.TJwBmwkE8z 00:14:50.671 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:50.671 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:50.671 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:50.671 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.TJwBmwkE8z 00:14:50.671 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:50.671 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71464 00:14:50.671 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:50.671 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:50.671 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71464 /var/tmp/bdevperf.sock 00:14:50.671 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71464 ']' 00:14:50.671 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:50.671 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:50.671 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:50.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:50.671 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:50.671 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:50.671 [2024-12-06 09:52:15.795615] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
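Note: the bdevperf instance above (pid 71327) mirrored the target-side provisioning over its own RPC socket, registering the same key file and then attaching with --psk key0, which is what completes the TLS handshake against the listener. Condensed, the initiator-side calls were:

  # register the key with the bdevperf application, then attach the TLS-protected controller
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.mj9UsBQ8gX
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
      -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0

The instance starting here (pid 71464) repeats the same attach but registers the second key, /tmp/tmp.TJwBmwkE8z, which the target has never seen, so the handshake is expected to fail.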
00:14:50.671 [2024-12-06 09:52:15.795999] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71464 ] 00:14:50.671 [2024-12-06 09:52:15.936733] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:50.929 [2024-12-06 09:52:15.989310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:50.929 [2024-12-06 09:52:16.044697] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:51.497 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:51.497 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:51.497 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.TJwBmwkE8z 00:14:51.757 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:52.016 [2024-12-06 09:52:17.248460] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:52.016 [2024-12-06 09:52:17.257734] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:52.016 [2024-12-06 09:52:17.258124] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x140b030 (107): Transport endpoint is not connected 00:14:52.016 [2024-12-06 09:52:17.259116] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x140b030 (9): Bad file descriptor 00:14:52.016 [2024-12-06 09:52:17.260111] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:14:52.016 [2024-12-06 09:52:17.260132] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:14:52.016 [2024-12-06 09:52:17.260161] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:14:52.016 [2024-12-06 09:52:17.260178] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:14:52.016 request: 00:14:52.016 { 00:14:52.016 "name": "TLSTEST", 00:14:52.016 "trtype": "tcp", 00:14:52.016 "traddr": "10.0.0.3", 00:14:52.016 "adrfam": "ipv4", 00:14:52.016 "trsvcid": "4420", 00:14:52.016 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:52.016 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:52.016 "prchk_reftag": false, 00:14:52.016 "prchk_guard": false, 00:14:52.016 "hdgst": false, 00:14:52.016 "ddgst": false, 00:14:52.016 "psk": "key0", 00:14:52.016 "allow_unrecognized_csi": false, 00:14:52.016 "method": "bdev_nvme_attach_controller", 00:14:52.017 "req_id": 1 00:14:52.017 } 00:14:52.017 Got JSON-RPC error response 00:14:52.017 response: 00:14:52.017 { 00:14:52.017 "code": -5, 00:14:52.017 "message": "Input/output error" 00:14:52.017 } 00:14:52.017 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71464 00:14:52.017 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71464 ']' 00:14:52.017 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71464 00:14:52.017 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:52.017 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:52.017 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71464 00:14:52.277 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:14:52.277 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:14:52.277 killing process with pid 71464 00:14:52.277 Received shutdown signal, test time was about 10.000000 seconds 00:14:52.277 00:14:52.277 Latency(us) 00:14:52.277 [2024-12-06T09:52:17.549Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:52.277 [2024-12-06T09:52:17.549Z] =================================================================================================================== 00:14:52.277 [2024-12-06T09:52:17.549Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:52.277 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71464' 00:14:52.277 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71464 00:14:52.277 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71464 00:14:52.277 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:14:52.277 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:14:52.277 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:52.277 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:52.277 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:52.277 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.mj9UsBQ8gX 00:14:52.277 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:14:52.277 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.mj9UsBQ8gX 
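Note: the Input/output error above is the expected outcome for the mismatched-key case; the harness wraps run_bdevperf in its NOT helper and treats the case as passing only when the attach exits non-zero. Outside the harness the same expectation could be expressed roughly as follows (a sketch, not the helper itself):

  # expect the attach to fail when the initiator's key is unknown to the target
  if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0; then
      echo "attach unexpectedly succeeded" >&2
      exit 1
  fi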
00:14:52.277 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:14:52.277 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:52.277 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:14:52.277 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:52.277 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.mj9UsBQ8gX 00:14:52.277 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:52.277 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:52.277 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:14:52.277 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.mj9UsBQ8gX 00:14:52.277 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:52.277 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71498 00:14:52.277 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:52.277 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:52.277 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71498 /var/tmp/bdevperf.sock 00:14:52.277 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71498 ']' 00:14:52.277 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:52.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:52.277 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:52.277 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:52.277 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:52.277 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:52.537 [2024-12-06 09:52:17.576442] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:14:52.537 [2024-12-06 09:52:17.576870] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71498 ] 00:14:52.537 [2024-12-06 09:52:17.726760] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:52.537 [2024-12-06 09:52:17.787696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:52.796 [2024-12-06 09:52:17.842850] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:53.365 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:53.365 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:53.365 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.mj9UsBQ8gX 00:14:53.624 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:14:53.884 [2024-12-06 09:52:19.000590] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:53.884 [2024-12-06 09:52:19.007825] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:14:53.884 [2024-12-06 09:52:19.007885] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:14:53.884 [2024-12-06 09:52:19.007974] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:53.884 [2024-12-06 09:52:19.008076] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e9030 (107): Transport endpoint is not connected 00:14:53.884 [2024-12-06 09:52:19.009067] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e9030 (9): Bad file descriptor 00:14:53.884 [2024-12-06 09:52:19.010064] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:14:53.884 [2024-12-06 09:52:19.010082] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:14:53.884 [2024-12-06 09:52:19.010109] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:14:53.884 [2024-12-06 09:52:19.010123] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
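Note: the "Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1" errors above show the lookup the target performs: the TLS identity carries the host and subsystem NQNs, and only pairs registered via nvmf_subsystem_add_host --psk resolve to a key. host2 was never registered, and the cnode2 attempt further below fails the same way. A hypothetical change, not part of this test, that would make the host2 case succeed:

  # hypothetical: authorize host2 against the same key so the identity lookup finds an entry
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
      nqn.2016-06.io.spdk:host2 --psk key0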
00:14:53.884 request: 00:14:53.884 { 00:14:53.884 "name": "TLSTEST", 00:14:53.884 "trtype": "tcp", 00:14:53.884 "traddr": "10.0.0.3", 00:14:53.884 "adrfam": "ipv4", 00:14:53.884 "trsvcid": "4420", 00:14:53.884 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:53.884 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:14:53.884 "prchk_reftag": false, 00:14:53.885 "prchk_guard": false, 00:14:53.885 "hdgst": false, 00:14:53.885 "ddgst": false, 00:14:53.885 "psk": "key0", 00:14:53.885 "allow_unrecognized_csi": false, 00:14:53.885 "method": "bdev_nvme_attach_controller", 00:14:53.885 "req_id": 1 00:14:53.885 } 00:14:53.885 Got JSON-RPC error response 00:14:53.885 response: 00:14:53.885 { 00:14:53.885 "code": -5, 00:14:53.885 "message": "Input/output error" 00:14:53.885 } 00:14:53.885 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71498 00:14:53.885 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71498 ']' 00:14:53.885 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71498 00:14:53.885 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:53.885 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:53.885 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71498 00:14:53.885 killing process with pid 71498 00:14:53.885 Received shutdown signal, test time was about 10.000000 seconds 00:14:53.885 00:14:53.885 Latency(us) 00:14:53.885 [2024-12-06T09:52:19.157Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:53.885 [2024-12-06T09:52:19.157Z] =================================================================================================================== 00:14:53.885 [2024-12-06T09:52:19.157Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:53.885 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:14:53.885 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:14:53.885 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71498' 00:14:53.885 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71498 00:14:53.885 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71498 00:14:54.145 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:14:54.145 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:14:54.145 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:54.145 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:54.145 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:54.145 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.mj9UsBQ8gX 00:14:54.145 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:14:54.145 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.mj9UsBQ8gX 
00:14:54.145 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:14:54.145 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:54.145 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:14:54.145 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:54.145 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.mj9UsBQ8gX 00:14:54.145 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:54.145 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:14:54.145 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:54.145 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.mj9UsBQ8gX 00:14:54.145 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:54.145 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71527 00:14:54.145 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:54.145 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71527 /var/tmp/bdevperf.sock 00:14:54.145 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71527 ']' 00:14:54.145 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:54.145 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:54.145 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:54.145 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:54.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:54.145 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:54.145 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:54.145 [2024-12-06 09:52:19.316233] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:14:54.145 [2024-12-06 09:52:19.316651] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71527 ] 00:14:54.404 [2024-12-06 09:52:19.463969] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:54.404 [2024-12-06 09:52:19.521803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:54.404 [2024-12-06 09:52:19.574441] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:54.404 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:54.404 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:54.404 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.mj9UsBQ8gX 00:14:54.988 09:52:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:54.988 [2024-12-06 09:52:20.164282] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:54.988 [2024-12-06 09:52:20.169105] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:14:54.988 [2024-12-06 09:52:20.169315] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:14:54.988 [2024-12-06 09:52:20.169416] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:54.988 [2024-12-06 09:52:20.169832] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc12030 (107): Transport endpoint is not connected 00:14:54.988 [2024-12-06 09:52:20.170819] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc12030 (9): Bad file descriptor 00:14:54.988 [2024-12-06 09:52:20.171816] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:14:54.988 [2024-12-06 09:52:20.171840] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:14:54.988 [2024-12-06 09:52:20.171851] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:14:54.988 [2024-12-06 09:52:20.171866] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:14:54.988 request: 00:14:54.988 { 00:14:54.988 "name": "TLSTEST", 00:14:54.988 "trtype": "tcp", 00:14:54.988 "traddr": "10.0.0.3", 00:14:54.988 "adrfam": "ipv4", 00:14:54.988 "trsvcid": "4420", 00:14:54.988 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:14:54.988 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:54.988 "prchk_reftag": false, 00:14:54.988 "prchk_guard": false, 00:14:54.988 "hdgst": false, 00:14:54.988 "ddgst": false, 00:14:54.988 "psk": "key0", 00:14:54.988 "allow_unrecognized_csi": false, 00:14:54.988 "method": "bdev_nvme_attach_controller", 00:14:54.988 "req_id": 1 00:14:54.988 } 00:14:54.988 Got JSON-RPC error response 00:14:54.988 response: 00:14:54.988 { 00:14:54.988 "code": -5, 00:14:54.988 "message": "Input/output error" 00:14:54.988 } 00:14:54.988 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71527 00:14:54.988 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71527 ']' 00:14:54.988 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71527 00:14:54.988 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:54.988 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:54.988 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71527 00:14:54.988 killing process with pid 71527 00:14:54.988 Received shutdown signal, test time was about 10.000000 seconds 00:14:54.988 00:14:54.988 Latency(us) 00:14:54.988 [2024-12-06T09:52:20.260Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:54.988 [2024-12-06T09:52:20.260Z] =================================================================================================================== 00:14:54.988 [2024-12-06T09:52:20.260Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:54.988 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:14:54.988 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:14:54.988 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71527' 00:14:54.988 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71527 00:14:54.988 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71527 00:14:55.248 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:14:55.248 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:14:55.248 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:55.248 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:55.248 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:55.248 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:55.248 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:14:55.248 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:55.248 09:52:20 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:14:55.248 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:55.248 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:14:55.248 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:55.248 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:55.248 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:55.248 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:55.248 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:55.248 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:14:55.248 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:55.248 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71548 00:14:55.248 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:55.248 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:55.248 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71548 /var/tmp/bdevperf.sock 00:14:55.248 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71548 ']' 00:14:55.248 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:55.248 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:55.248 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:55.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:55.248 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:55.248 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:55.248 [2024-12-06 09:52:20.464444] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
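Note: the final negative case, whose bdevperf instance (pid 71548) starts here, passes an empty string as the key path. keyring_file_add_key requires an absolute path, so no key is created and the later attach fails with "Required key not available" instead of a handshake error. A minimal sketch of the rejected registration (the empty argument is the point of the test):

  # an empty, non-absolute path is rejected before any TLS work happens
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' \
      || echo "key rejected as expected"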
00:14:55.248 [2024-12-06 09:52:20.465092] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71548 ] 00:14:55.507 [2024-12-06 09:52:20.612177] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:55.508 [2024-12-06 09:52:20.671335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:55.508 [2024-12-06 09:52:20.724138] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:55.767 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:55.767 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:55.767 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:14:56.025 [2024-12-06 09:52:21.042163] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:14:56.025 [2024-12-06 09:52:21.042468] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:14:56.025 request: 00:14:56.025 { 00:14:56.025 "name": "key0", 00:14:56.025 "path": "", 00:14:56.025 "method": "keyring_file_add_key", 00:14:56.025 "req_id": 1 00:14:56.025 } 00:14:56.025 Got JSON-RPC error response 00:14:56.025 response: 00:14:56.025 { 00:14:56.025 "code": -1, 00:14:56.025 "message": "Operation not permitted" 00:14:56.025 } 00:14:56.025 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:56.284 [2024-12-06 09:52:21.330321] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:56.284 [2024-12-06 09:52:21.330392] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:14:56.284 request: 00:14:56.284 { 00:14:56.284 "name": "TLSTEST", 00:14:56.284 "trtype": "tcp", 00:14:56.284 "traddr": "10.0.0.3", 00:14:56.284 "adrfam": "ipv4", 00:14:56.284 "trsvcid": "4420", 00:14:56.284 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:56.284 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:56.284 "prchk_reftag": false, 00:14:56.284 "prchk_guard": false, 00:14:56.284 "hdgst": false, 00:14:56.284 "ddgst": false, 00:14:56.284 "psk": "key0", 00:14:56.284 "allow_unrecognized_csi": false, 00:14:56.284 "method": "bdev_nvme_attach_controller", 00:14:56.284 "req_id": 1 00:14:56.284 } 00:14:56.284 Got JSON-RPC error response 00:14:56.284 response: 00:14:56.284 { 00:14:56.284 "code": -126, 00:14:56.284 "message": "Required key not available" 00:14:56.284 } 00:14:56.284 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71548 00:14:56.284 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71548 ']' 00:14:56.284 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71548 00:14:56.284 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:56.284 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:56.284 09:52:21 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71548 00:14:56.284 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:14:56.284 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:14:56.284 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71548' 00:14:56.284 killing process with pid 71548 00:14:56.284 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71548 00:14:56.284 Received shutdown signal, test time was about 10.000000 seconds 00:14:56.284 00:14:56.284 Latency(us) 00:14:56.284 [2024-12-06T09:52:21.556Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:56.284 [2024-12-06T09:52:21.556Z] =================================================================================================================== 00:14:56.284 [2024-12-06T09:52:21.556Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:56.284 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71548 00:14:56.542 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:14:56.542 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:14:56.542 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:56.543 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:56.543 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:56.543 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 71102 00:14:56.543 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71102 ']' 00:14:56.543 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71102 00:14:56.543 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:56.543 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:56.543 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71102 00:14:56.543 killing process with pid 71102 00:14:56.543 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:56.543 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:56.543 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71102' 00:14:56.543 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71102 00:14:56.543 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71102 00:14:56.802 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:14:56.802 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:14:56.802 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:14:56.802 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 
-- # prefix=NVMeTLSkey-1 00:14:56.802 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:14:56.802 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:14:56.802 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:14:56.802 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:14:56.802 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:14:56.802 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.kfJwOpD591 00:14:56.802 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:14:56.802 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.kfJwOpD591 00:14:56.802 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:14:56.802 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:56.802 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:56.802 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:56.802 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71583 00:14:56.802 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:56.802 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71583 00:14:56.802 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71583 ']' 00:14:56.802 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:56.802 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:56.802 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:56.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:56.802 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:56.802 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:56.802 [2024-12-06 09:52:21.980056] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:14:56.802 [2024-12-06 09:52:21.980130] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:57.062 [2024-12-06 09:52:22.117692] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:57.062 [2024-12-06 09:52:22.173008] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:57.062 [2024-12-06 09:52:22.173075] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:57.062 [2024-12-06 09:52:22.173086] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:57.062 [2024-12-06 09:52:22.173093] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:57.062 [2024-12-06 09:52:22.173100] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:57.062 [2024-12-06 09:52:22.173591] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:57.062 [2024-12-06 09:52:22.243216] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:57.062 09:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:57.062 09:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:57.062 09:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:57.062 09:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:57.062 09:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:57.321 09:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:57.321 09:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.kfJwOpD591 00:14:57.321 09:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.kfJwOpD591 00:14:57.321 09:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:57.580 [2024-12-06 09:52:22.640863] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:57.580 09:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:57.839 09:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:14:58.098 [2024-12-06 09:52:23.164934] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:58.098 [2024-12-06 09:52:23.165177] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:58.098 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:58.356 malloc0 00:14:58.357 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:58.615 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.kfJwOpD591 00:14:58.873 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:14:58.873 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.kfJwOpD591 00:14:58.873 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 
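The trace above first builds the TLS PSK with format_interchange_psk and writes it to a mode-0600 temp file, then provisions the freshly started nvmf_tgt through scripts/rpc.py: a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1, a TLS-enabled listener on 10.0.0.3:4420 (-k), a malloc0 namespace, the key file registered as key0, and the host added with --psk key0. The bash sketch below condenses that sequence; the addresses, NQNs and RPC commands are the ones in the trace, while the inline python is an assumption about what format_key computes (base64 of the ASCII key bytes followed by a little-endian CRC-32), so treat it as illustrative rather than the canonical helper.

    # Illustrative sketch of the target-side TLS setup shown in the trace above.
    key_hex=00112233445566778899aabbccddeeff0011223344556677
    # Assumed interchange-key layout: "NVMeTLSkey-1:02:" + base64(key bytes + little-endian CRC-32) + ":"
    psk=$(python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); crc=zlib.crc32(k).to_bytes(4,"little"); print("NVMeTLSkey-1:02:" + base64.b64encode(k + crc).decode() + ":")' "$key_hex")
    key_path=$(mktemp)
    echo -n "$psk" > "$key_path"
    chmod 0600 "$key_path"        # owner-only; the negative cases later in the run show 0666 keys being rejected

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" nvmf_create_transport -t tcp -o
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
    "$rpc" bdev_malloc_create 32 4096 -b malloc0
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    "$rpc" keyring_file_add_key key0 "$key_path"
    "$rpc" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0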
00:14:58.873 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:58.873 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:58.873 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.kfJwOpD591 00:14:58.873 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:58.873 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71631 00:14:58.873 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:58.873 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:58.873 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71631 /var/tmp/bdevperf.sock 00:14:58.873 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71631 ']' 00:14:58.873 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:58.873 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:58.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:58.874 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:58.874 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:58.874 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:59.132 [2024-12-06 09:52:24.189322] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:14:59.132 [2024-12-06 09:52:24.189408] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71631 ] 00:14:59.132 [2024-12-06 09:52:24.330426] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:59.132 [2024-12-06 09:52:24.376927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:59.391 [2024-12-06 09:52:24.430449] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:59.391 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:59.391 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:59.391 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.kfJwOpD591 00:14:59.649 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:59.649 [2024-12-06 09:52:24.916954] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:59.908 TLSTESTn1 00:14:59.908 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:59.908 Running I/O for 10 seconds... 00:15:01.826 4510.00 IOPS, 17.62 MiB/s [2024-12-06T09:52:28.476Z] 4533.50 IOPS, 17.71 MiB/s [2024-12-06T09:52:29.415Z] 4533.00 IOPS, 17.71 MiB/s [2024-12-06T09:52:30.353Z] 4542.50 IOPS, 17.74 MiB/s [2024-12-06T09:52:31.300Z] 4548.60 IOPS, 17.77 MiB/s [2024-12-06T09:52:32.237Z] 4550.17 IOPS, 17.77 MiB/s [2024-12-06T09:52:33.175Z] 4552.71 IOPS, 17.78 MiB/s [2024-12-06T09:52:34.113Z] 4548.00 IOPS, 17.77 MiB/s [2024-12-06T09:52:35.493Z] 4499.56 IOPS, 17.58 MiB/s [2024-12-06T09:52:35.493Z] 4456.20 IOPS, 17.41 MiB/s 00:15:10.221 Latency(us) 00:15:10.221 [2024-12-06T09:52:35.493Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:10.221 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:10.221 Verification LBA range: start 0x0 length 0x2000 00:15:10.221 TLSTESTn1 : 10.02 4457.97 17.41 0.00 0.00 28649.41 5957.82 24069.59 00:15:10.221 [2024-12-06T09:52:35.493Z] =================================================================================================================== 00:15:10.221 [2024-12-06T09:52:35.493Z] Total : 4457.97 17.41 0.00 0.00 28649.41 5957.82 24069.59 00:15:10.221 { 00:15:10.221 "results": [ 00:15:10.221 { 00:15:10.221 "job": "TLSTESTn1", 00:15:10.221 "core_mask": "0x4", 00:15:10.221 "workload": "verify", 00:15:10.221 "status": "finished", 00:15:10.221 "verify_range": { 00:15:10.221 "start": 0, 00:15:10.221 "length": 8192 00:15:10.221 }, 00:15:10.221 "queue_depth": 128, 00:15:10.221 "io_size": 4096, 00:15:10.221 "runtime": 10.023834, 00:15:10.221 "iops": 4457.974862712212, 00:15:10.221 "mibps": 17.413964307469577, 00:15:10.221 "io_failed": 0, 00:15:10.221 "io_timeout": 0, 00:15:10.221 "avg_latency_us": 28649.407195094664, 00:15:10.221 "min_latency_us": 5957.818181818182, 00:15:10.221 
"max_latency_us": 24069.585454545453 00:15:10.221 } 00:15:10.221 ], 00:15:10.221 "core_count": 1 00:15:10.221 } 00:15:10.221 09:52:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:10.221 09:52:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 71631 00:15:10.221 09:52:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71631 ']' 00:15:10.221 09:52:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71631 00:15:10.221 09:52:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:10.221 09:52:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:10.221 09:52:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71631 00:15:10.221 09:52:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:15:10.221 09:52:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:15:10.221 killing process with pid 71631 00:15:10.221 09:52:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71631' 00:15:10.221 Received shutdown signal, test time was about 10.000000 seconds 00:15:10.221 00:15:10.221 Latency(us) 00:15:10.221 [2024-12-06T09:52:35.493Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:10.221 [2024-12-06T09:52:35.493Z] =================================================================================================================== 00:15:10.221 [2024-12-06T09:52:35.493Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:10.221 09:52:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71631 00:15:10.221 09:52:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71631 00:15:10.221 09:52:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.kfJwOpD591 00:15:10.221 09:52:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.kfJwOpD591 00:15:10.221 09:52:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:15:10.221 09:52:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.kfJwOpD591 00:15:10.221 09:52:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:15:10.221 09:52:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:10.221 09:52:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:15:10.221 09:52:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:10.221 09:52:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.kfJwOpD591 00:15:10.221 09:52:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:10.221 09:52:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:10.221 09:52:35 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:10.221 09:52:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.kfJwOpD591 00:15:10.221 09:52:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:10.221 09:52:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71755 00:15:10.221 09:52:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:10.221 09:52:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:10.221 09:52:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71755 /var/tmp/bdevperf.sock 00:15:10.221 09:52:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71755 ']' 00:15:10.221 09:52:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:10.221 09:52:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:10.221 09:52:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:10.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:10.222 09:52:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:10.222 09:52:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:10.222 [2024-12-06 09:52:35.433187] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:15:10.222 [2024-12-06 09:52:35.433486] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71755 ] 00:15:10.481 [2024-12-06 09:52:35.582651] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:10.481 [2024-12-06 09:52:35.640722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:10.481 [2024-12-06 09:52:35.694997] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:10.739 09:52:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:10.739 09:52:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:10.739 09:52:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.kfJwOpD591 00:15:10.739 [2024-12-06 09:52:35.963753] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.kfJwOpD591': 0100666 00:15:10.739 [2024-12-06 09:52:35.963803] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:15:10.739 request: 00:15:10.739 { 00:15:10.739 "name": "key0", 00:15:10.739 "path": "/tmp/tmp.kfJwOpD591", 00:15:10.739 "method": "keyring_file_add_key", 00:15:10.739 "req_id": 1 00:15:10.739 } 00:15:10.739 Got JSON-RPC error response 00:15:10.739 response: 00:15:10.739 { 00:15:10.739 "code": -1, 00:15:10.739 "message": "Operation not permitted" 00:15:10.739 } 00:15:10.739 09:52:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:15:10.997 [2024-12-06 09:52:36.247902] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:10.997 [2024-12-06 09:52:36.247974] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:15:10.997 request: 00:15:10.997 { 00:15:10.997 "name": "TLSTEST", 00:15:10.997 "trtype": "tcp", 00:15:10.997 "traddr": "10.0.0.3", 00:15:10.997 "adrfam": "ipv4", 00:15:10.997 "trsvcid": "4420", 00:15:10.997 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:10.997 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:10.997 "prchk_reftag": false, 00:15:10.997 "prchk_guard": false, 00:15:10.997 "hdgst": false, 00:15:10.997 "ddgst": false, 00:15:10.997 "psk": "key0", 00:15:10.997 "allow_unrecognized_csi": false, 00:15:10.997 "method": "bdev_nvme_attach_controller", 00:15:10.997 "req_id": 1 00:15:10.997 } 00:15:10.997 Got JSON-RPC error response 00:15:10.997 response: 00:15:10.997 { 00:15:10.997 "code": -126, 00:15:10.997 "message": "Required key not available" 00:15:10.997 } 00:15:11.256 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71755 00:15:11.256 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71755 ']' 00:15:11.256 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71755 00:15:11.256 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:11.256 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:11.256 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71755 00:15:11.256 killing process with pid 71755 00:15:11.256 Received shutdown signal, test time was about 10.000000 seconds 00:15:11.256 00:15:11.256 Latency(us) 00:15:11.256 [2024-12-06T09:52:36.528Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:11.256 [2024-12-06T09:52:36.528Z] =================================================================================================================== 00:15:11.256 [2024-12-06T09:52:36.528Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:11.256 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:15:11.256 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:15:11.256 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71755' 00:15:11.256 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71755 00:15:11.256 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71755 00:15:11.256 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:15:11.256 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:15:11.256 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:11.257 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:11.257 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:11.257 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 71583 00:15:11.257 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71583 ']' 00:15:11.257 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71583 00:15:11.257 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:11.257 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:11.257 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71583 00:15:11.257 killing process with pid 71583 00:15:11.257 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:11.257 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:11.257 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71583' 00:15:11.257 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71583 00:15:11.257 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71583 00:15:11.824 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:15:11.824 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:11.824 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:11.824 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set 
+x 00:15:11.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:11.824 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71792 00:15:11.824 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71792 00:15:11.824 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71792 ']' 00:15:11.824 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:11.824 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:11.824 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:11.824 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:11.824 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:11.824 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:11.824 [2024-12-06 09:52:36.891183] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:15:11.824 [2024-12-06 09:52:36.891292] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:11.824 [2024-12-06 09:52:37.036304] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:11.824 [2024-12-06 09:52:37.091846] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:11.824 [2024-12-06 09:52:37.091913] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:11.824 [2024-12-06 09:52:37.091923] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:11.824 [2024-12-06 09:52:37.091931] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:11.824 [2024-12-06 09:52:37.091937] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
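On the initiator side the test drives bdevperf over its own RPC socket: the PSK file is registered with keyring_file_add_key against /var/tmp/bdevperf.sock, the controller is attached with --psk key0, and I/O is generated with bdevperf.py perform_tests. A sketch of that sequence, reusing the socket, key path, address and NQNs from the trace (bdevperf itself is started separately with -z -r /var/tmp/bdevperf.sock, as shown above):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock

    "$rpc" -s "$sock" keyring_file_add_key key0 /tmp/tmp.kfJwOpD591
    "$rpc" -s "$sock" bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s "$sock" perform_tests

When the key file carries the wrong permissions, as in the chmod 0666 run above, keyring_file_add_key fails with "Operation not permitted" and the subsequent attach fails with "Required key not available", which is the -1/-126 pair of JSON-RPC errors in the trace.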
00:15:11.824 [2024-12-06 09:52:37.092425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:12.084 [2024-12-06 09:52:37.164035] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:12.653 09:52:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:12.653 09:52:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:12.653 09:52:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:12.653 09:52:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:12.653 09:52:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:12.653 09:52:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:12.653 09:52:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.kfJwOpD591 00:15:12.653 09:52:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:15:12.653 09:52:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.kfJwOpD591 00:15:12.653 09:52:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:15:12.653 09:52:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:12.653 09:52:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:15:12.653 09:52:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:12.653 09:52:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.kfJwOpD591 00:15:12.653 09:52:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.kfJwOpD591 00:15:12.653 09:52:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:13.221 [2024-12-06 09:52:38.197752] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:13.221 09:52:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:13.221 09:52:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:15:13.481 [2024-12-06 09:52:38.661769] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:13.481 [2024-12-06 09:52:38.662258] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:13.481 09:52:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:13.741 malloc0 00:15:13.741 09:52:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:14.001 09:52:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.kfJwOpD591 00:15:14.261 
[2024-12-06 09:52:39.415701] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.kfJwOpD591': 0100666 00:15:14.261 [2024-12-06 09:52:39.415756] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:15:14.261 request: 00:15:14.261 { 00:15:14.261 "name": "key0", 00:15:14.261 "path": "/tmp/tmp.kfJwOpD591", 00:15:14.261 "method": "keyring_file_add_key", 00:15:14.261 "req_id": 1 00:15:14.261 } 00:15:14.261 Got JSON-RPC error response 00:15:14.261 response: 00:15:14.261 { 00:15:14.261 "code": -1, 00:15:14.261 "message": "Operation not permitted" 00:15:14.261 } 00:15:14.261 09:52:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:15:14.521 [2024-12-06 09:52:39.639753] tcp.c:3777:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:15:14.521 [2024-12-06 09:52:39.639803] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:15:14.521 request: 00:15:14.521 { 00:15:14.521 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:14.521 "host": "nqn.2016-06.io.spdk:host1", 00:15:14.521 "psk": "key0", 00:15:14.521 "method": "nvmf_subsystem_add_host", 00:15:14.521 "req_id": 1 00:15:14.521 } 00:15:14.521 Got JSON-RPC error response 00:15:14.521 response: 00:15:14.521 { 00:15:14.521 "code": -32603, 00:15:14.521 "message": "Internal error" 00:15:14.521 } 00:15:14.521 09:52:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:15:14.521 09:52:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:14.521 09:52:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:14.521 09:52:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:14.521 09:52:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 71792 00:15:14.521 09:52:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71792 ']' 00:15:14.521 09:52:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71792 00:15:14.521 09:52:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:14.521 09:52:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:14.521 09:52:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71792 00:15:14.521 killing process with pid 71792 00:15:14.521 09:52:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:14.521 09:52:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:14.521 09:52:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71792' 00:15:14.521 09:52:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71792 00:15:14.521 09:52:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71792 00:15:14.781 09:52:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.kfJwOpD591 00:15:14.781 09:52:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:15:14.781 09:52:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:14.781 09:52:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:14.781 09:52:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:14.781 09:52:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71856 00:15:14.781 09:52:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:14.781 09:52:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71856 00:15:14.781 09:52:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71856 ']' 00:15:14.781 09:52:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:14.781 09:52:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:14.781 09:52:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:14.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:14.781 09:52:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:14.781 09:52:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:14.781 [2024-12-06 09:52:40.016780] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:15:14.781 [2024-12-06 09:52:40.016879] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:15.041 [2024-12-06 09:52:40.151239] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:15.041 [2024-12-06 09:52:40.199508] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:15.041 [2024-12-06 09:52:40.199591] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:15.041 [2024-12-06 09:52:40.199613] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:15.041 [2024-12-06 09:52:40.199620] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:15.041 [2024-12-06 09:52:40.199626] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
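The failed setup_nvmf_tgt pass above is why the key file is chmod'ed back to 0600 before nvmf_tgt is restarted: keyring_file_check_path rejects the 0666 file, keyring_file_add_key returns "Operation not permitted", and nvmf_subsystem_add_host then fails with "Internal error" because key0 was never added to the keyring. A minimal guard, using the same temp key path as the trace:

    key_path=/tmp/tmp.kfJwOpD591
    chmod 0600 "$key_path"   # the trace rejects the 0666 copy of this file and accepts the 0600 one
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 "$key_path"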
00:15:15.041 [2024-12-06 09:52:40.200050] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:15.041 [2024-12-06 09:52:40.273775] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:15.979 09:52:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:15.979 09:52:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:15.979 09:52:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:15.979 09:52:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:15.979 09:52:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:15.979 09:52:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:15.979 09:52:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.kfJwOpD591 00:15:15.979 09:52:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.kfJwOpD591 00:15:15.979 09:52:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:16.249 [2024-12-06 09:52:41.281887] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:16.249 09:52:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:16.524 09:52:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:15:16.783 [2024-12-06 09:52:41.818282] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:16.783 [2024-12-06 09:52:41.818893] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:16.783 09:52:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:17.042 malloc0 00:15:17.042 09:52:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:17.301 09:52:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.kfJwOpD591 00:15:17.560 09:52:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:15:17.819 09:52:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=71917 00:15:17.819 09:52:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:17.819 09:52:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:17.819 09:52:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 71917 /var/tmp/bdevperf.sock 00:15:17.819 09:52:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71917 ']' 
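With the target (pid 71856) and a new bdevperf (pid 71917) both running, the test snapshots each application's configuration with save_config; the tgtconf and bdevperfconf JSON dumps that follow are that output captured into shell variables. A sketch of the two calls, assuming the default /var/tmp/spdk.sock for the target and hypothetical output file names:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" save_config > tgt_config.json                                 # target app on /var/tmp/spdk.sock
    "$rpc" -s /var/tmp/bdevperf.sock save_config > bdevperf_config.json

Dumps like these record the keyring, sock, bdev and nvmf subsystems exactly as configured, and can typically be fed back to an SPDK application to recreate the same setup.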
00:15:17.819 09:52:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:17.819 09:52:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:17.819 09:52:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:17.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:17.819 09:52:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:17.819 09:52:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:17.819 [2024-12-06 09:52:42.889943] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:15:17.819 [2024-12-06 09:52:42.890491] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71917 ] 00:15:17.819 [2024-12-06 09:52:43.039068] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:18.078 [2024-12-06 09:52:43.102307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:18.078 [2024-12-06 09:52:43.159137] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:18.078 09:52:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:18.078 09:52:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:18.078 09:52:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.kfJwOpD591 00:15:18.338 09:52:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:15:18.598 [2024-12-06 09:52:43.653572] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:18.598 TLSTESTn1 00:15:18.598 09:52:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:15:18.858 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:15:18.858 "subsystems": [ 00:15:18.858 { 00:15:18.858 "subsystem": "keyring", 00:15:18.858 "config": [ 00:15:18.858 { 00:15:18.858 "method": "keyring_file_add_key", 00:15:18.858 "params": { 00:15:18.858 "name": "key0", 00:15:18.858 "path": "/tmp/tmp.kfJwOpD591" 00:15:18.858 } 00:15:18.858 } 00:15:18.858 ] 00:15:18.858 }, 00:15:18.858 { 00:15:18.858 "subsystem": "iobuf", 00:15:18.858 "config": [ 00:15:18.858 { 00:15:18.858 "method": "iobuf_set_options", 00:15:18.858 "params": { 00:15:18.858 "small_pool_count": 8192, 00:15:18.858 "large_pool_count": 1024, 00:15:18.858 "small_bufsize": 8192, 00:15:18.858 "large_bufsize": 135168, 00:15:18.858 "enable_numa": false 00:15:18.858 } 00:15:18.858 } 00:15:18.858 ] 00:15:18.858 }, 00:15:18.858 { 00:15:18.858 "subsystem": "sock", 00:15:18.858 "config": [ 00:15:18.858 { 00:15:18.858 "method": "sock_set_default_impl", 00:15:18.858 "params": { 
00:15:18.858 "impl_name": "uring" 00:15:18.858 } 00:15:18.858 }, 00:15:18.858 { 00:15:18.858 "method": "sock_impl_set_options", 00:15:18.858 "params": { 00:15:18.858 "impl_name": "ssl", 00:15:18.858 "recv_buf_size": 4096, 00:15:18.858 "send_buf_size": 4096, 00:15:18.858 "enable_recv_pipe": true, 00:15:18.858 "enable_quickack": false, 00:15:18.858 "enable_placement_id": 0, 00:15:18.858 "enable_zerocopy_send_server": true, 00:15:18.858 "enable_zerocopy_send_client": false, 00:15:18.858 "zerocopy_threshold": 0, 00:15:18.858 "tls_version": 0, 00:15:18.858 "enable_ktls": false 00:15:18.858 } 00:15:18.858 }, 00:15:18.858 { 00:15:18.858 "method": "sock_impl_set_options", 00:15:18.858 "params": { 00:15:18.858 "impl_name": "posix", 00:15:18.858 "recv_buf_size": 2097152, 00:15:18.858 "send_buf_size": 2097152, 00:15:18.858 "enable_recv_pipe": true, 00:15:18.858 "enable_quickack": false, 00:15:18.858 "enable_placement_id": 0, 00:15:18.858 "enable_zerocopy_send_server": true, 00:15:18.858 "enable_zerocopy_send_client": false, 00:15:18.858 "zerocopy_threshold": 0, 00:15:18.858 "tls_version": 0, 00:15:18.858 "enable_ktls": false 00:15:18.858 } 00:15:18.858 }, 00:15:18.858 { 00:15:18.858 "method": "sock_impl_set_options", 00:15:18.858 "params": { 00:15:18.858 "impl_name": "uring", 00:15:18.858 "recv_buf_size": 2097152, 00:15:18.858 "send_buf_size": 2097152, 00:15:18.858 "enable_recv_pipe": true, 00:15:18.858 "enable_quickack": false, 00:15:18.858 "enable_placement_id": 0, 00:15:18.858 "enable_zerocopy_send_server": false, 00:15:18.858 "enable_zerocopy_send_client": false, 00:15:18.858 "zerocopy_threshold": 0, 00:15:18.858 "tls_version": 0, 00:15:18.858 "enable_ktls": false 00:15:18.858 } 00:15:18.858 } 00:15:18.858 ] 00:15:18.858 }, 00:15:18.858 { 00:15:18.858 "subsystem": "vmd", 00:15:18.858 "config": [] 00:15:18.858 }, 00:15:18.858 { 00:15:18.858 "subsystem": "accel", 00:15:18.858 "config": [ 00:15:18.858 { 00:15:18.858 "method": "accel_set_options", 00:15:18.858 "params": { 00:15:18.858 "small_cache_size": 128, 00:15:18.858 "large_cache_size": 16, 00:15:18.858 "task_count": 2048, 00:15:18.858 "sequence_count": 2048, 00:15:18.858 "buf_count": 2048 00:15:18.858 } 00:15:18.858 } 00:15:18.858 ] 00:15:18.858 }, 00:15:18.858 { 00:15:18.858 "subsystem": "bdev", 00:15:18.858 "config": [ 00:15:18.858 { 00:15:18.858 "method": "bdev_set_options", 00:15:18.858 "params": { 00:15:18.858 "bdev_io_pool_size": 65535, 00:15:18.858 "bdev_io_cache_size": 256, 00:15:18.858 "bdev_auto_examine": true, 00:15:18.858 "iobuf_small_cache_size": 128, 00:15:18.858 "iobuf_large_cache_size": 16 00:15:18.858 } 00:15:18.858 }, 00:15:18.858 { 00:15:18.858 "method": "bdev_raid_set_options", 00:15:18.858 "params": { 00:15:18.858 "process_window_size_kb": 1024, 00:15:18.858 "process_max_bandwidth_mb_sec": 0 00:15:18.858 } 00:15:18.858 }, 00:15:18.858 { 00:15:18.858 "method": "bdev_iscsi_set_options", 00:15:18.858 "params": { 00:15:18.858 "timeout_sec": 30 00:15:18.858 } 00:15:18.858 }, 00:15:18.858 { 00:15:18.858 "method": "bdev_nvme_set_options", 00:15:18.858 "params": { 00:15:18.858 "action_on_timeout": "none", 00:15:18.858 "timeout_us": 0, 00:15:18.858 "timeout_admin_us": 0, 00:15:18.858 "keep_alive_timeout_ms": 10000, 00:15:18.858 "arbitration_burst": 0, 00:15:18.858 "low_priority_weight": 0, 00:15:18.858 "medium_priority_weight": 0, 00:15:18.858 "high_priority_weight": 0, 00:15:18.858 "nvme_adminq_poll_period_us": 10000, 00:15:18.858 "nvme_ioq_poll_period_us": 0, 00:15:18.858 "io_queue_requests": 0, 00:15:18.858 "delay_cmd_submit": 
true, 00:15:18.858 "transport_retry_count": 4, 00:15:18.858 "bdev_retry_count": 3, 00:15:18.858 "transport_ack_timeout": 0, 00:15:18.858 "ctrlr_loss_timeout_sec": 0, 00:15:18.858 "reconnect_delay_sec": 0, 00:15:18.858 "fast_io_fail_timeout_sec": 0, 00:15:18.858 "disable_auto_failback": false, 00:15:18.858 "generate_uuids": false, 00:15:18.858 "transport_tos": 0, 00:15:18.858 "nvme_error_stat": false, 00:15:18.858 "rdma_srq_size": 0, 00:15:18.858 "io_path_stat": false, 00:15:18.858 "allow_accel_sequence": false, 00:15:18.858 "rdma_max_cq_size": 0, 00:15:18.858 "rdma_cm_event_timeout_ms": 0, 00:15:18.858 "dhchap_digests": [ 00:15:18.858 "sha256", 00:15:18.858 "sha384", 00:15:18.858 "sha512" 00:15:18.858 ], 00:15:18.858 "dhchap_dhgroups": [ 00:15:18.858 "null", 00:15:18.858 "ffdhe2048", 00:15:18.858 "ffdhe3072", 00:15:18.858 "ffdhe4096", 00:15:18.858 "ffdhe6144", 00:15:18.858 "ffdhe8192" 00:15:18.858 ] 00:15:18.858 } 00:15:18.858 }, 00:15:18.858 { 00:15:18.858 "method": "bdev_nvme_set_hotplug", 00:15:18.858 "params": { 00:15:18.858 "period_us": 100000, 00:15:18.858 "enable": false 00:15:18.858 } 00:15:18.858 }, 00:15:18.858 { 00:15:18.859 "method": "bdev_malloc_create", 00:15:18.859 "params": { 00:15:18.859 "name": "malloc0", 00:15:18.859 "num_blocks": 8192, 00:15:18.859 "block_size": 4096, 00:15:18.859 "physical_block_size": 4096, 00:15:18.859 "uuid": "1126b3e2-47d5-41d5-be6a-07a3feaaffae", 00:15:18.859 "optimal_io_boundary": 0, 00:15:18.859 "md_size": 0, 00:15:18.859 "dif_type": 0, 00:15:18.859 "dif_is_head_of_md": false, 00:15:18.859 "dif_pi_format": 0 00:15:18.859 } 00:15:18.859 }, 00:15:18.859 { 00:15:18.859 "method": "bdev_wait_for_examine" 00:15:18.859 } 00:15:18.859 ] 00:15:18.859 }, 00:15:18.859 { 00:15:18.859 "subsystem": "nbd", 00:15:18.859 "config": [] 00:15:18.859 }, 00:15:18.859 { 00:15:18.859 "subsystem": "scheduler", 00:15:18.859 "config": [ 00:15:18.859 { 00:15:18.859 "method": "framework_set_scheduler", 00:15:18.859 "params": { 00:15:18.859 "name": "static" 00:15:18.859 } 00:15:18.859 } 00:15:18.859 ] 00:15:18.859 }, 00:15:18.859 { 00:15:18.859 "subsystem": "nvmf", 00:15:18.859 "config": [ 00:15:18.859 { 00:15:18.859 "method": "nvmf_set_config", 00:15:18.859 "params": { 00:15:18.859 "discovery_filter": "match_any", 00:15:18.859 "admin_cmd_passthru": { 00:15:18.859 "identify_ctrlr": false 00:15:18.859 }, 00:15:18.859 "dhchap_digests": [ 00:15:18.859 "sha256", 00:15:18.859 "sha384", 00:15:18.859 "sha512" 00:15:18.859 ], 00:15:18.859 "dhchap_dhgroups": [ 00:15:18.859 "null", 00:15:18.859 "ffdhe2048", 00:15:18.859 "ffdhe3072", 00:15:18.859 "ffdhe4096", 00:15:18.859 "ffdhe6144", 00:15:18.859 "ffdhe8192" 00:15:18.859 ] 00:15:18.859 } 00:15:18.859 }, 00:15:18.859 { 00:15:18.859 "method": "nvmf_set_max_subsystems", 00:15:18.859 "params": { 00:15:18.859 "max_subsystems": 1024 00:15:18.859 } 00:15:18.859 }, 00:15:18.859 { 00:15:18.859 "method": "nvmf_set_crdt", 00:15:18.859 "params": { 00:15:18.859 "crdt1": 0, 00:15:18.859 "crdt2": 0, 00:15:18.859 "crdt3": 0 00:15:18.859 } 00:15:18.859 }, 00:15:18.859 { 00:15:18.859 "method": "nvmf_create_transport", 00:15:18.859 "params": { 00:15:18.859 "trtype": "TCP", 00:15:18.859 "max_queue_depth": 128, 00:15:18.859 "max_io_qpairs_per_ctrlr": 127, 00:15:18.859 "in_capsule_data_size": 4096, 00:15:18.859 "max_io_size": 131072, 00:15:18.859 "io_unit_size": 131072, 00:15:18.859 "max_aq_depth": 128, 00:15:18.859 "num_shared_buffers": 511, 00:15:18.859 "buf_cache_size": 4294967295, 00:15:18.859 "dif_insert_or_strip": false, 00:15:18.859 "zcopy": false, 
00:15:18.859 "c2h_success": false, 00:15:18.859 "sock_priority": 0, 00:15:18.859 "abort_timeout_sec": 1, 00:15:18.859 "ack_timeout": 0, 00:15:18.859 "data_wr_pool_size": 0 00:15:18.859 } 00:15:18.859 }, 00:15:18.859 { 00:15:18.859 "method": "nvmf_create_subsystem", 00:15:18.859 "params": { 00:15:18.859 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:18.859 "allow_any_host": false, 00:15:18.859 "serial_number": "SPDK00000000000001", 00:15:18.859 "model_number": "SPDK bdev Controller", 00:15:18.859 "max_namespaces": 10, 00:15:18.859 "min_cntlid": 1, 00:15:18.859 "max_cntlid": 65519, 00:15:18.859 "ana_reporting": false 00:15:18.859 } 00:15:18.859 }, 00:15:18.859 { 00:15:18.859 "method": "nvmf_subsystem_add_host", 00:15:18.859 "params": { 00:15:18.859 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:18.859 "host": "nqn.2016-06.io.spdk:host1", 00:15:18.859 "psk": "key0" 00:15:18.859 } 00:15:18.859 }, 00:15:18.859 { 00:15:18.859 "method": "nvmf_subsystem_add_ns", 00:15:18.859 "params": { 00:15:18.859 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:18.859 "namespace": { 00:15:18.859 "nsid": 1, 00:15:18.859 "bdev_name": "malloc0", 00:15:18.859 "nguid": "1126B3E247D541D5BE6A07A3FEAAFFAE", 00:15:18.859 "uuid": "1126b3e2-47d5-41d5-be6a-07a3feaaffae", 00:15:18.859 "no_auto_visible": false 00:15:18.859 } 00:15:18.859 } 00:15:18.859 }, 00:15:18.859 { 00:15:18.859 "method": "nvmf_subsystem_add_listener", 00:15:18.859 "params": { 00:15:18.859 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:18.859 "listen_address": { 00:15:18.859 "trtype": "TCP", 00:15:18.859 "adrfam": "IPv4", 00:15:18.859 "traddr": "10.0.0.3", 00:15:18.859 "trsvcid": "4420" 00:15:18.859 }, 00:15:18.859 "secure_channel": true 00:15:18.859 } 00:15:18.859 } 00:15:18.859 ] 00:15:18.859 } 00:15:18.859 ] 00:15:18.859 }' 00:15:18.859 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:15:19.428 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:15:19.428 "subsystems": [ 00:15:19.428 { 00:15:19.428 "subsystem": "keyring", 00:15:19.428 "config": [ 00:15:19.428 { 00:15:19.428 "method": "keyring_file_add_key", 00:15:19.428 "params": { 00:15:19.428 "name": "key0", 00:15:19.428 "path": "/tmp/tmp.kfJwOpD591" 00:15:19.428 } 00:15:19.428 } 00:15:19.428 ] 00:15:19.428 }, 00:15:19.428 { 00:15:19.428 "subsystem": "iobuf", 00:15:19.428 "config": [ 00:15:19.428 { 00:15:19.428 "method": "iobuf_set_options", 00:15:19.428 "params": { 00:15:19.428 "small_pool_count": 8192, 00:15:19.428 "large_pool_count": 1024, 00:15:19.428 "small_bufsize": 8192, 00:15:19.428 "large_bufsize": 135168, 00:15:19.428 "enable_numa": false 00:15:19.428 } 00:15:19.428 } 00:15:19.428 ] 00:15:19.428 }, 00:15:19.428 { 00:15:19.428 "subsystem": "sock", 00:15:19.428 "config": [ 00:15:19.428 { 00:15:19.428 "method": "sock_set_default_impl", 00:15:19.428 "params": { 00:15:19.428 "impl_name": "uring" 00:15:19.428 } 00:15:19.428 }, 00:15:19.428 { 00:15:19.428 "method": "sock_impl_set_options", 00:15:19.428 "params": { 00:15:19.428 "impl_name": "ssl", 00:15:19.428 "recv_buf_size": 4096, 00:15:19.428 "send_buf_size": 4096, 00:15:19.428 "enable_recv_pipe": true, 00:15:19.429 "enable_quickack": false, 00:15:19.429 "enable_placement_id": 0, 00:15:19.429 "enable_zerocopy_send_server": true, 00:15:19.429 "enable_zerocopy_send_client": false, 00:15:19.429 "zerocopy_threshold": 0, 00:15:19.429 "tls_version": 0, 00:15:19.429 "enable_ktls": false 00:15:19.429 } 00:15:19.429 }, 
00:15:19.429 { 00:15:19.429 "method": "sock_impl_set_options", 00:15:19.429 "params": { 00:15:19.429 "impl_name": "posix", 00:15:19.429 "recv_buf_size": 2097152, 00:15:19.429 "send_buf_size": 2097152, 00:15:19.429 "enable_recv_pipe": true, 00:15:19.429 "enable_quickack": false, 00:15:19.429 "enable_placement_id": 0, 00:15:19.429 "enable_zerocopy_send_server": true, 00:15:19.429 "enable_zerocopy_send_client": false, 00:15:19.429 "zerocopy_threshold": 0, 00:15:19.429 "tls_version": 0, 00:15:19.429 "enable_ktls": false 00:15:19.429 } 00:15:19.429 }, 00:15:19.429 { 00:15:19.429 "method": "sock_impl_set_options", 00:15:19.429 "params": { 00:15:19.429 "impl_name": "uring", 00:15:19.429 "recv_buf_size": 2097152, 00:15:19.429 "send_buf_size": 2097152, 00:15:19.429 "enable_recv_pipe": true, 00:15:19.429 "enable_quickack": false, 00:15:19.429 "enable_placement_id": 0, 00:15:19.429 "enable_zerocopy_send_server": false, 00:15:19.429 "enable_zerocopy_send_client": false, 00:15:19.429 "zerocopy_threshold": 0, 00:15:19.429 "tls_version": 0, 00:15:19.429 "enable_ktls": false 00:15:19.429 } 00:15:19.429 } 00:15:19.429 ] 00:15:19.429 }, 00:15:19.429 { 00:15:19.429 "subsystem": "vmd", 00:15:19.429 "config": [] 00:15:19.429 }, 00:15:19.429 { 00:15:19.429 "subsystem": "accel", 00:15:19.429 "config": [ 00:15:19.429 { 00:15:19.429 "method": "accel_set_options", 00:15:19.429 "params": { 00:15:19.429 "small_cache_size": 128, 00:15:19.429 "large_cache_size": 16, 00:15:19.429 "task_count": 2048, 00:15:19.429 "sequence_count": 2048, 00:15:19.429 "buf_count": 2048 00:15:19.429 } 00:15:19.429 } 00:15:19.429 ] 00:15:19.429 }, 00:15:19.429 { 00:15:19.429 "subsystem": "bdev", 00:15:19.429 "config": [ 00:15:19.429 { 00:15:19.429 "method": "bdev_set_options", 00:15:19.429 "params": { 00:15:19.429 "bdev_io_pool_size": 65535, 00:15:19.429 "bdev_io_cache_size": 256, 00:15:19.429 "bdev_auto_examine": true, 00:15:19.429 "iobuf_small_cache_size": 128, 00:15:19.429 "iobuf_large_cache_size": 16 00:15:19.429 } 00:15:19.429 }, 00:15:19.429 { 00:15:19.429 "method": "bdev_raid_set_options", 00:15:19.429 "params": { 00:15:19.429 "process_window_size_kb": 1024, 00:15:19.429 "process_max_bandwidth_mb_sec": 0 00:15:19.429 } 00:15:19.429 }, 00:15:19.429 { 00:15:19.429 "method": "bdev_iscsi_set_options", 00:15:19.429 "params": { 00:15:19.429 "timeout_sec": 30 00:15:19.429 } 00:15:19.429 }, 00:15:19.429 { 00:15:19.429 "method": "bdev_nvme_set_options", 00:15:19.429 "params": { 00:15:19.429 "action_on_timeout": "none", 00:15:19.429 "timeout_us": 0, 00:15:19.429 "timeout_admin_us": 0, 00:15:19.429 "keep_alive_timeout_ms": 10000, 00:15:19.429 "arbitration_burst": 0, 00:15:19.429 "low_priority_weight": 0, 00:15:19.429 "medium_priority_weight": 0, 00:15:19.429 "high_priority_weight": 0, 00:15:19.429 "nvme_adminq_poll_period_us": 10000, 00:15:19.429 "nvme_ioq_poll_period_us": 0, 00:15:19.429 "io_queue_requests": 512, 00:15:19.429 "delay_cmd_submit": true, 00:15:19.429 "transport_retry_count": 4, 00:15:19.429 "bdev_retry_count": 3, 00:15:19.429 "transport_ack_timeout": 0, 00:15:19.429 "ctrlr_loss_timeout_sec": 0, 00:15:19.429 "reconnect_delay_sec": 0, 00:15:19.429 "fast_io_fail_timeout_sec": 0, 00:15:19.429 "disable_auto_failback": false, 00:15:19.429 "generate_uuids": false, 00:15:19.429 "transport_tos": 0, 00:15:19.429 "nvme_error_stat": false, 00:15:19.429 "rdma_srq_size": 0, 00:15:19.429 "io_path_stat": false, 00:15:19.429 "allow_accel_sequence": false, 00:15:19.429 "rdma_max_cq_size": 0, 00:15:19.429 "rdma_cm_event_timeout_ms": 0, 00:15:19.429 
"dhchap_digests": [ 00:15:19.429 "sha256", 00:15:19.429 "sha384", 00:15:19.429 "sha512" 00:15:19.429 ], 00:15:19.429 "dhchap_dhgroups": [ 00:15:19.429 "null", 00:15:19.429 "ffdhe2048", 00:15:19.429 "ffdhe3072", 00:15:19.429 "ffdhe4096", 00:15:19.429 "ffdhe6144", 00:15:19.429 "ffdhe8192" 00:15:19.429 ] 00:15:19.429 } 00:15:19.429 }, 00:15:19.429 { 00:15:19.429 "method": "bdev_nvme_attach_controller", 00:15:19.429 "params": { 00:15:19.429 "name": "TLSTEST", 00:15:19.429 "trtype": "TCP", 00:15:19.429 "adrfam": "IPv4", 00:15:19.429 "traddr": "10.0.0.3", 00:15:19.429 "trsvcid": "4420", 00:15:19.429 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:19.429 "prchk_reftag": false, 00:15:19.429 "prchk_guard": false, 00:15:19.429 "ctrlr_loss_timeout_sec": 0, 00:15:19.429 "reconnect_delay_sec": 0, 00:15:19.429 "fast_io_fail_timeout_sec": 0, 00:15:19.429 "psk": "key0", 00:15:19.429 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:19.429 "hdgst": false, 00:15:19.429 "ddgst": false, 00:15:19.429 "multipath": "multipath" 00:15:19.429 } 00:15:19.429 }, 00:15:19.429 { 00:15:19.429 "method": "bdev_nvme_set_hotplug", 00:15:19.429 "params": { 00:15:19.429 "period_us": 100000, 00:15:19.429 "enable": false 00:15:19.429 } 00:15:19.429 }, 00:15:19.429 { 00:15:19.429 "method": "bdev_wait_for_examine" 00:15:19.429 } 00:15:19.429 ] 00:15:19.429 }, 00:15:19.429 { 00:15:19.429 "subsystem": "nbd", 00:15:19.429 "config": [] 00:15:19.429 } 00:15:19.429 ] 00:15:19.429 }' 00:15:19.429 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 71917 00:15:19.429 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71917 ']' 00:15:19.429 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71917 00:15:19.429 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:19.429 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:19.429 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71917 00:15:19.429 killing process with pid 71917 00:15:19.429 Received shutdown signal, test time was about 10.000000 seconds 00:15:19.429 00:15:19.429 Latency(us) 00:15:19.429 [2024-12-06T09:52:44.701Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:19.429 [2024-12-06T09:52:44.701Z] =================================================================================================================== 00:15:19.429 [2024-12-06T09:52:44.701Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:19.429 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:15:19.429 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:15:19.429 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71917' 00:15:19.429 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71917 00:15:19.429 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71917 00:15:19.429 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 71856 00:15:19.429 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71856 ']' 00:15:19.429 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # 
kill -0 71856 00:15:19.429 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:19.429 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:19.429 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71856 00:15:19.688 killing process with pid 71856 00:15:19.688 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:19.688 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:19.688 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71856' 00:15:19.688 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71856 00:15:19.688 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71856 00:15:19.957 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:15:19.957 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:19.957 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:19.957 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:15:19.957 "subsystems": [ 00:15:19.957 { 00:15:19.957 "subsystem": "keyring", 00:15:19.957 "config": [ 00:15:19.957 { 00:15:19.957 "method": "keyring_file_add_key", 00:15:19.957 "params": { 00:15:19.957 "name": "key0", 00:15:19.957 "path": "/tmp/tmp.kfJwOpD591" 00:15:19.957 } 00:15:19.957 } 00:15:19.957 ] 00:15:19.957 }, 00:15:19.957 { 00:15:19.957 "subsystem": "iobuf", 00:15:19.957 "config": [ 00:15:19.957 { 00:15:19.957 "method": "iobuf_set_options", 00:15:19.957 "params": { 00:15:19.957 "small_pool_count": 8192, 00:15:19.957 "large_pool_count": 1024, 00:15:19.957 "small_bufsize": 8192, 00:15:19.957 "large_bufsize": 135168, 00:15:19.957 "enable_numa": false 00:15:19.957 } 00:15:19.957 } 00:15:19.957 ] 00:15:19.957 }, 00:15:19.957 { 00:15:19.957 "subsystem": "sock", 00:15:19.957 "config": [ 00:15:19.957 { 00:15:19.957 "method": "sock_set_default_impl", 00:15:19.957 "params": { 00:15:19.957 "impl_name": "uring" 00:15:19.957 } 00:15:19.957 }, 00:15:19.957 { 00:15:19.957 "method": "sock_impl_set_options", 00:15:19.957 "params": { 00:15:19.957 "impl_name": "ssl", 00:15:19.957 "recv_buf_size": 4096, 00:15:19.957 "send_buf_size": 4096, 00:15:19.957 "enable_recv_pipe": true, 00:15:19.957 "enable_quickack": false, 00:15:19.957 "enable_placement_id": 0, 00:15:19.957 "enable_zerocopy_send_server": true, 00:15:19.957 "enable_zerocopy_send_client": false, 00:15:19.957 "zerocopy_threshold": 0, 00:15:19.957 "tls_version": 0, 00:15:19.957 "enable_ktls": false 00:15:19.957 } 00:15:19.957 }, 00:15:19.957 { 00:15:19.957 "method": "sock_impl_set_options", 00:15:19.957 "params": { 00:15:19.957 "impl_name": "posix", 00:15:19.957 "recv_buf_size": 2097152, 00:15:19.957 "send_buf_size": 2097152, 00:15:19.957 "enable_recv_pipe": true, 00:15:19.957 "enable_quickack": false, 00:15:19.957 "enable_placement_id": 0, 00:15:19.957 "enable_zerocopy_send_server": true, 00:15:19.957 "enable_zerocopy_send_client": false, 00:15:19.957 "zerocopy_threshold": 0, 00:15:19.957 "tls_version": 0, 00:15:19.957 "enable_ktls": false 00:15:19.957 } 00:15:19.957 }, 00:15:19.957 { 00:15:19.957 "method": "sock_impl_set_options", 
00:15:19.957 "params": { 00:15:19.957 "impl_name": "uring", 00:15:19.957 "recv_buf_size": 2097152, 00:15:19.957 "send_buf_size": 2097152, 00:15:19.957 "enable_recv_pipe": true, 00:15:19.957 "enable_quickack": false, 00:15:19.957 "enable_placement_id": 0, 00:15:19.957 "enable_zerocopy_send_server": false, 00:15:19.957 "enable_zerocopy_send_client": false, 00:15:19.957 "zerocopy_threshold": 0, 00:15:19.957 "tls_version": 0, 00:15:19.957 "enable_ktls": false 00:15:19.957 } 00:15:19.957 } 00:15:19.957 ] 00:15:19.957 }, 00:15:19.957 { 00:15:19.957 "subsystem": "vmd", 00:15:19.957 "config": [] 00:15:19.957 }, 00:15:19.957 { 00:15:19.957 "subsystem": "accel", 00:15:19.957 "config": [ 00:15:19.957 { 00:15:19.957 "method": "accel_set_options", 00:15:19.957 "params": { 00:15:19.957 "small_cache_size": 128, 00:15:19.957 "large_cache_size": 16, 00:15:19.957 "task_count": 2048, 00:15:19.957 "sequence_count": 2048, 00:15:19.957 "buf_count": 2048 00:15:19.957 } 00:15:19.957 } 00:15:19.957 ] 00:15:19.957 }, 00:15:19.957 { 00:15:19.957 "subsystem": "bdev", 00:15:19.957 "config": [ 00:15:19.957 { 00:15:19.957 "method": "bdev_set_options", 00:15:19.957 "params": { 00:15:19.957 "bdev_io_pool_size": 65535, 00:15:19.957 "bdev_io_cache_size": 256, 00:15:19.957 "bdev_auto_examine": true, 00:15:19.957 "iobuf_small_cache_size": 128, 00:15:19.957 "iobuf_large_cache_size": 16 00:15:19.957 } 00:15:19.957 }, 00:15:19.957 { 00:15:19.957 "method": "bdev_raid_set_options", 00:15:19.957 "params": { 00:15:19.957 "process_window_size_kb": 1024, 00:15:19.957 "process_max_bandwidth_mb_sec": 0 00:15:19.957 } 00:15:19.957 }, 00:15:19.957 { 00:15:19.957 "method": "bdev_iscsi_set_options", 00:15:19.957 "params": { 00:15:19.957 "timeout_sec": 30 00:15:19.957 } 00:15:19.957 }, 00:15:19.957 { 00:15:19.957 "method": "bdev_nvme_set_options", 00:15:19.957 "params": { 00:15:19.957 "action_on_timeout": "none", 00:15:19.957 "timeout_us": 0, 00:15:19.957 "timeout_admin_us": 0, 00:15:19.957 "keep_alive_timeout_ms": 10000, 00:15:19.957 "arbitration_burst": 0, 00:15:19.957 "low_priority_weight": 0, 00:15:19.957 "medium_priority_weight": 0, 00:15:19.957 "high_priority_weight": 0, 00:15:19.957 "nvme_adminq_poll_period_us": 10000, 00:15:19.957 "nvme_ioq_poll_period_us": 0, 00:15:19.957 "io_queue_requests": 0, 00:15:19.957 "delay_cmd_submit": true, 00:15:19.957 "transport_retry_count": 4, 00:15:19.957 "bdev_retry_count": 3, 00:15:19.957 "transport_ack_timeout": 0, 00:15:19.957 "ctrlr_loss_timeout_sec": 0, 00:15:19.957 "reconnect_delay_sec": 0, 00:15:19.957 "fast_io_fail_timeout_sec": 0, 00:15:19.958 "disable_auto_failback": false, 00:15:19.958 "generate_uuids": false, 00:15:19.958 "transport_tos": 0, 00:15:19.958 "nvme_error_stat": false, 00:15:19.958 "rdma_srq_size": 0, 00:15:19.958 "io_path_stat": false, 00:15:19.958 "allow_accel_sequence": false, 00:15:19.958 "rdma_max_cq_size": 0, 00:15:19.958 "rdma_cm_event_timeout_ms": 0, 00:15:19.958 "dhchap_digests": [ 00:15:19.958 "sha256", 00:15:19.958 "sha384", 00:15:19.958 "sha512" 00:15:19.958 ], 00:15:19.958 "dhchap_dhgroups": [ 00:15:19.958 "null", 00:15:19.958 "ffdhe2048", 00:15:19.958 "ffdhe3072", 00:15:19.958 "ffdhe4096", 00:15:19.958 "ffdhe6144", 00:15:19.958 "ffdhe8192" 00:15:19.958 ] 00:15:19.958 } 00:15:19.958 }, 00:15:19.958 { 00:15:19.958 "method": "bdev_nvme_set_hotplug", 00:15:19.958 "params": { 00:15:19.958 "period_us": 100000, 00:15:19.958 "enable": false 00:15:19.958 } 00:15:19.958 }, 00:15:19.958 { 00:15:19.958 "method": "bdev_malloc_create", 00:15:19.958 "params": { 00:15:19.958 
"name": "malloc0", 00:15:19.958 "num_blocks": 8192, 00:15:19.958 "block_size": 4096, 00:15:19.958 "physical_block_size": 4096, 00:15:19.958 "uuid": "1126b3e2-47d5-41d5-be6a-07a3feaaffae", 00:15:19.958 "optimal_io_boundary": 0, 00:15:19.958 "md_size": 0, 00:15:19.958 "dif_type": 0, 00:15:19.958 "dif_is_head_of_md": false, 00:15:19.958 "dif_pi_format": 0 00:15:19.958 } 00:15:19.958 }, 00:15:19.958 { 00:15:19.958 "method": "bdev_wait_for_examine" 00:15:19.958 } 00:15:19.958 ] 00:15:19.958 }, 00:15:19.958 { 00:15:19.958 "subsystem": "nbd", 00:15:19.958 "config": [] 00:15:19.958 }, 00:15:19.958 { 00:15:19.958 "subsystem": "scheduler", 00:15:19.958 "config": [ 00:15:19.958 { 00:15:19.958 "method": "framework_set_scheduler", 00:15:19.958 "params": { 00:15:19.958 "name": "static" 00:15:19.958 } 00:15:19.958 } 00:15:19.958 ] 00:15:19.958 }, 00:15:19.958 { 00:15:19.958 "subsystem": "nvmf", 00:15:19.958 "config": [ 00:15:19.958 { 00:15:19.958 "method": "nvmf_set_config", 00:15:19.958 "params": { 00:15:19.958 "discovery_filter": "match_any", 00:15:19.958 "admin_cmd_passthru": { 00:15:19.958 "identify_ctrlr": false 00:15:19.958 }, 00:15:19.958 "dhchap_digests": [ 00:15:19.958 "sha256", 00:15:19.958 "sha384", 00:15:19.958 "sha512" 00:15:19.958 ], 00:15:19.958 "dhchap_dhgroups": [ 00:15:19.958 "null", 00:15:19.958 "ffdhe2048", 00:15:19.958 "ffdhe3072", 00:15:19.958 "ffdhe4096", 00:15:19.958 "ffdhe6144", 00:15:19.958 "ffdhe8192" 00:15:19.958 ] 00:15:19.958 } 00:15:19.958 }, 00:15:19.958 { 00:15:19.958 "method": "nvmf_set_max_subsystems", 00:15:19.958 "params": { 00:15:19.958 "max_subsystems": 1024 00:15:19.958 } 00:15:19.958 }, 00:15:19.958 { 00:15:19.958 "method": "nvmf_set_crdt", 00:15:19.958 "params": { 00:15:19.958 "crdt1": 0, 00:15:19.958 "crdt2": 0, 00:15:19.958 "crdt3": 0 00:15:19.958 } 00:15:19.958 }, 00:15:19.958 { 00:15:19.958 "method": "nvmf_create_transport", 00:15:19.958 "params": { 00:15:19.958 "trtype": "TCP", 00:15:19.958 "max_queue_depth": 128, 00:15:19.958 "max_io_qpairs_per_ctrlr": 127, 00:15:19.958 "in_capsule_data_size": 4096, 00:15:19.958 "max_io_size": 131072, 00:15:19.958 "io_unit_size": 131072, 00:15:19.958 "max_aq_depth": 128, 00:15:19.958 "num_shared_buffers": 511, 00:15:19.958 "buf_cache_size": 4294967295, 00:15:19.958 "dif_insert_or_strip": false, 00:15:19.958 "zcopy": false, 00:15:19.958 "c2h_success": false, 00:15:19.958 "sock_priority": 0, 00:15:19.958 "abort_timeout_sec": 1, 00:15:19.958 "ack_timeout": 0, 00:15:19.958 "data_wr_pool_size": 0 00:15:19.958 } 00:15:19.958 }, 00:15:19.958 { 00:15:19.958 "method": "nvmf_create_subsystem", 00:15:19.958 "params": { 00:15:19.958 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:19.958 "allow_any_host": false, 00:15:19.958 "serial_number": "SPDK00000000000001", 00:15:19.958 "model_number": "SPDK bdev Controller", 00:15:19.958 "max_namespaces": 10, 00:15:19.958 "min_cntlid": 1, 00:15:19.958 "max_cntlid": 65519, 00:15:19.958 "ana_reporting": false 00:15:19.958 } 00:15:19.958 }, 00:15:19.958 { 00:15:19.958 "method": "nvmf_subsystem_add_host", 00:15:19.958 "params": { 00:15:19.958 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:19.958 "host": "nqn.2016-06.io.spdk:host1", 00:15:19.958 "psk": "key0" 00:15:19.958 } 00:15:19.958 }, 00:15:19.958 { 00:15:19.958 "method": "nvmf_subsystem_add_ns", 00:15:19.958 "params": { 00:15:19.958 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:19.958 "namespace": { 00:15:19.958 "nsid": 1, 00:15:19.958 "bdev_name": "malloc0", 00:15:19.958 "nguid": "1126B3E247D541D5BE6A07A3FEAAFFAE", 00:15:19.958 "uuid": 
"1126b3e2-47d5-41d5-be6a-07a3feaaffae", 00:15:19.958 "no_auto_visible": false 00:15:19.958 } 00:15:19.958 } 00:15:19.958 }, 00:15:19.958 { 00:15:19.958 "method": "nvmf_subsystem_add_listener", 00:15:19.958 "params": { 00:15:19.958 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:19.958 "listen_address": { 00:15:19.958 "trtype": "TCP", 00:15:19.958 "adrfam": "IPv4", 00:15:19.958 "traddr": "10.0.0.3", 00:15:19.958 "trsvcid": "4420" 00:15:19.958 }, 00:15:19.958 "secure_channel": true 00:15:19.958 } 00:15:19.958 } 00:15:19.958 ] 00:15:19.958 } 00:15:19.958 ] 00:15:19.958 }' 00:15:19.958 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:19.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:19.958 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71959 00:15:19.958 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:15:19.958 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71959 00:15:19.958 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71959 ']' 00:15:19.958 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:19.958 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:19.958 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:19.958 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:19.958 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:19.958 [2024-12-06 09:52:45.040116] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:15:19.958 [2024-12-06 09:52:45.040192] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:19.958 [2024-12-06 09:52:45.177894] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:20.217 [2024-12-06 09:52:45.235102] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:20.217 [2024-12-06 09:52:45.235176] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:20.217 [2024-12-06 09:52:45.235198] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:20.217 [2024-12-06 09:52:45.235205] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:20.217 [2024-12-06 09:52:45.235212] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:20.217 [2024-12-06 09:52:45.235762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:20.217 [2024-12-06 09:52:45.433620] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:20.475 [2024-12-06 09:52:45.536652] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:20.475 [2024-12-06 09:52:45.568564] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:20.475 [2024-12-06 09:52:45.568796] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:21.041 09:52:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:21.041 09:52:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:21.041 09:52:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:21.041 09:52:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:21.041 09:52:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:21.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:21.042 09:52:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:21.042 09:52:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=71991 00:15:21.042 09:52:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 71991 /var/tmp/bdevperf.sock 00:15:21.042 09:52:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71991 ']' 00:15:21.042 09:52:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:21.042 09:52:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:21.042 09:52:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:15:21.042 09:52:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:21.042 09:52:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:21.042 09:52:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:15:21.042 09:52:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:15:21.042 "subsystems": [ 00:15:21.042 { 00:15:21.042 "subsystem": "keyring", 00:15:21.042 "config": [ 00:15:21.042 { 00:15:21.042 "method": "keyring_file_add_key", 00:15:21.042 "params": { 00:15:21.042 "name": "key0", 00:15:21.042 "path": "/tmp/tmp.kfJwOpD591" 00:15:21.042 } 00:15:21.042 } 00:15:21.042 ] 00:15:21.042 }, 00:15:21.042 { 00:15:21.042 "subsystem": "iobuf", 00:15:21.042 "config": [ 00:15:21.042 { 00:15:21.042 "method": "iobuf_set_options", 00:15:21.042 "params": { 00:15:21.042 "small_pool_count": 8192, 00:15:21.042 "large_pool_count": 1024, 00:15:21.042 "small_bufsize": 8192, 00:15:21.042 "large_bufsize": 135168, 00:15:21.042 "enable_numa": false 00:15:21.042 } 00:15:21.042 } 00:15:21.042 ] 00:15:21.042 }, 00:15:21.042 { 00:15:21.042 "subsystem": "sock", 00:15:21.042 "config": [ 00:15:21.042 { 00:15:21.042 "method": "sock_set_default_impl", 00:15:21.042 "params": { 00:15:21.042 "impl_name": "uring" 00:15:21.042 } 00:15:21.042 }, 00:15:21.042 { 00:15:21.042 "method": "sock_impl_set_options", 00:15:21.042 "params": { 00:15:21.042 "impl_name": "ssl", 00:15:21.042 "recv_buf_size": 4096, 00:15:21.042 "send_buf_size": 4096, 00:15:21.042 "enable_recv_pipe": true, 00:15:21.042 "enable_quickack": false, 00:15:21.042 "enable_placement_id": 0, 00:15:21.042 "enable_zerocopy_send_server": true, 00:15:21.042 "enable_zerocopy_send_client": false, 00:15:21.042 "zerocopy_threshold": 0, 00:15:21.042 "tls_version": 0, 00:15:21.042 "enable_ktls": false 00:15:21.042 } 00:15:21.042 }, 00:15:21.042 { 00:15:21.042 "method": "sock_impl_set_options", 00:15:21.042 "params": { 00:15:21.042 "impl_name": "posix", 00:15:21.042 "recv_buf_size": 2097152, 00:15:21.042 "send_buf_size": 2097152, 00:15:21.042 "enable_recv_pipe": true, 00:15:21.042 "enable_quickack": false, 00:15:21.042 "enable_placement_id": 0, 00:15:21.042 "enable_zerocopy_send_server": true, 00:15:21.042 "enable_zerocopy_send_client": false, 00:15:21.042 "zerocopy_threshold": 0, 00:15:21.042 "tls_version": 0, 00:15:21.042 "enable_ktls": false 00:15:21.042 } 00:15:21.042 }, 00:15:21.042 { 00:15:21.042 "method": "sock_impl_set_options", 00:15:21.042 "params": { 00:15:21.042 "impl_name": "uring", 00:15:21.042 "recv_buf_size": 2097152, 00:15:21.042 "send_buf_size": 2097152, 00:15:21.042 "enable_recv_pipe": true, 00:15:21.042 "enable_quickack": false, 00:15:21.042 "enable_placement_id": 0, 00:15:21.042 "enable_zerocopy_send_server": false, 00:15:21.042 "enable_zerocopy_send_client": false, 00:15:21.042 "zerocopy_threshold": 0, 00:15:21.042 "tls_version": 0, 00:15:21.042 "enable_ktls": false 00:15:21.042 } 00:15:21.042 } 00:15:21.042 ] 00:15:21.042 }, 00:15:21.042 { 00:15:21.042 "subsystem": "vmd", 00:15:21.042 "config": [] 00:15:21.042 }, 00:15:21.042 { 00:15:21.042 "subsystem": "accel", 00:15:21.042 "config": [ 00:15:21.042 { 00:15:21.042 "method": "accel_set_options", 00:15:21.042 "params": { 00:15:21.042 "small_cache_size": 128, 00:15:21.042 "large_cache_size": 16, 00:15:21.042 "task_count": 2048, 00:15:21.042 "sequence_count": 
2048, 00:15:21.042 "buf_count": 2048 00:15:21.042 } 00:15:21.042 } 00:15:21.042 ] 00:15:21.042 }, 00:15:21.042 { 00:15:21.042 "subsystem": "bdev", 00:15:21.042 "config": [ 00:15:21.042 { 00:15:21.042 "method": "bdev_set_options", 00:15:21.042 "params": { 00:15:21.042 "bdev_io_pool_size": 65535, 00:15:21.042 "bdev_io_cache_size": 256, 00:15:21.042 "bdev_auto_examine": true, 00:15:21.042 "iobuf_small_cache_size": 128, 00:15:21.042 "iobuf_large_cache_size": 16 00:15:21.042 } 00:15:21.042 }, 00:15:21.042 { 00:15:21.042 "method": "bdev_raid_set_options", 00:15:21.042 "params": { 00:15:21.042 "process_window_size_kb": 1024, 00:15:21.042 "process_max_bandwidth_mb_sec": 0 00:15:21.042 } 00:15:21.042 }, 00:15:21.042 { 00:15:21.042 "method": "bdev_iscsi_set_options", 00:15:21.042 "params": { 00:15:21.042 "timeout_sec": 30 00:15:21.042 } 00:15:21.042 }, 00:15:21.042 { 00:15:21.042 "method": "bdev_nvme_set_options", 00:15:21.042 "params": { 00:15:21.042 "action_on_timeout": "none", 00:15:21.042 "timeout_us": 0, 00:15:21.042 "timeout_admin_us": 0, 00:15:21.042 "keep_alive_timeout_ms": 10000, 00:15:21.042 "arbitration_burst": 0, 00:15:21.042 "low_priority_weight": 0, 00:15:21.042 "medium_priority_weight": 0, 00:15:21.042 "high_priority_weight": 0, 00:15:21.042 "nvme_adminq_poll_period_us": 10000, 00:15:21.042 "nvme_ioq_poll_period_us": 0, 00:15:21.042 "io_queue_requests": 512, 00:15:21.042 "delay_cmd_submit": true, 00:15:21.042 "transport_retry_count": 4, 00:15:21.042 "bdev_retry_count": 3, 00:15:21.042 "transport_ack_timeout": 0, 00:15:21.042 "ctrlr_loss_timeout_sec": 0, 00:15:21.042 "reconnect_delay_sec": 0, 00:15:21.042 "fast_io_fail_timeout_sec": 0, 00:15:21.042 "disable_auto_failback": false, 00:15:21.042 "generate_uuids": false, 00:15:21.042 "transport_tos": 0, 00:15:21.042 "nvme_error_stat": false, 00:15:21.042 "rdma_srq_size": 0, 00:15:21.042 "io_path_stat": false, 00:15:21.042 "allow_accel_sequence": false, 00:15:21.042 "rdma_max_cq_size": 0, 00:15:21.042 "rdma_cm_event_timeout_ms": 0, 00:15:21.042 "dhchap_digests": [ 00:15:21.042 "sha256", 00:15:21.042 "sha384", 00:15:21.042 "sha512" 00:15:21.042 ], 00:15:21.042 "dhchap_dhgroups": [ 00:15:21.042 "null", 00:15:21.042 "ffdhe2048", 00:15:21.042 "ffdhe3072", 00:15:21.042 "ffdhe4096", 00:15:21.042 "ffdhe6144", 00:15:21.042 "ffdhe8192" 00:15:21.042 ] 00:15:21.042 } 00:15:21.042 }, 00:15:21.042 { 00:15:21.042 "method": "bdev_nvme_attach_controller", 00:15:21.042 "params": { 00:15:21.042 "name": "TLSTEST", 00:15:21.042 "trtype": "TCP", 00:15:21.042 "adrfam": "IPv4", 00:15:21.042 "traddr": "10.0.0.3", 00:15:21.042 "trsvcid": "4420", 00:15:21.042 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:21.042 "prchk_reftag": false, 00:15:21.042 "prchk_guard": false, 00:15:21.042 "ctrlr_loss_timeout_sec": 0, 00:15:21.042 "reconnect_delay_sec": 0, 00:15:21.042 "fast_io_fail_timeout_sec": 0, 00:15:21.042 "psk": "key0", 00:15:21.042 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:21.042 "hdgst": false, 00:15:21.042 "ddgst": false, 00:15:21.042 "multipath": "multipath" 00:15:21.042 } 00:15:21.042 }, 00:15:21.042 { 00:15:21.042 "method": "bdev_nvme_set_hotplug", 00:15:21.042 "params": { 00:15:21.042 "period_us": 100000, 00:15:21.042 "enable": false 00:15:21.042 } 00:15:21.042 }, 00:15:21.042 { 00:15:21.042 "method": "bdev_wait_for_examine" 00:15:21.042 } 00:15:21.043 ] 00:15:21.043 }, 00:15:21.043 { 00:15:21.043 "subsystem": "nbd", 00:15:21.043 "config": [] 00:15:21.043 } 00:15:21.043 ] 00:15:21.043 }' 00:15:21.043 [2024-12-06 09:52:46.175884] Starting SPDK v25.01-pre git 
sha1 eec618948 / DPDK 24.03.0 initialization... 00:15:21.043 [2024-12-06 09:52:46.175990] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71991 ] 00:15:21.301 [2024-12-06 09:52:46.328623] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:21.301 [2024-12-06 09:52:46.388345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:21.301 [2024-12-06 09:52:46.525121] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:21.560 [2024-12-06 09:52:46.576805] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:22.126 09:52:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:22.126 09:52:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:22.126 09:52:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:15:22.126 Running I/O for 10 seconds... 00:15:24.440 3491.00 IOPS, 13.64 MiB/s [2024-12-06T09:52:50.648Z] 3583.00 IOPS, 14.00 MiB/s [2024-12-06T09:52:51.586Z] 3718.67 IOPS, 14.53 MiB/s [2024-12-06T09:52:52.523Z] 3873.75 IOPS, 15.13 MiB/s [2024-12-06T09:52:53.460Z] 3935.20 IOPS, 15.37 MiB/s [2024-12-06T09:52:54.398Z] 3932.50 IOPS, 15.36 MiB/s [2024-12-06T09:52:55.778Z] 3850.43 IOPS, 15.04 MiB/s [2024-12-06T09:52:56.714Z] 3787.75 IOPS, 14.80 MiB/s [2024-12-06T09:52:57.653Z] 3741.33 IOPS, 14.61 MiB/s [2024-12-06T09:52:57.653Z] 3699.40 IOPS, 14.45 MiB/s 00:15:32.381 Latency(us) 00:15:32.381 [2024-12-06T09:52:57.653Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:32.381 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:32.381 Verification LBA range: start 0x0 length 0x2000 00:15:32.381 TLSTESTn1 : 10.03 3700.83 14.46 0.00 0.00 34508.78 7477.06 26095.24 00:15:32.381 [2024-12-06T09:52:57.653Z] =================================================================================================================== 00:15:32.381 [2024-12-06T09:52:57.653Z] Total : 3700.83 14.46 0.00 0.00 34508.78 7477.06 26095.24 00:15:32.381 { 00:15:32.381 "results": [ 00:15:32.381 { 00:15:32.381 "job": "TLSTESTn1", 00:15:32.381 "core_mask": "0x4", 00:15:32.381 "workload": "verify", 00:15:32.381 "status": "finished", 00:15:32.381 "verify_range": { 00:15:32.381 "start": 0, 00:15:32.381 "length": 8192 00:15:32.381 }, 00:15:32.381 "queue_depth": 128, 00:15:32.381 "io_size": 4096, 00:15:32.381 "runtime": 10.030194, 00:15:32.381 "iops": 3700.8257268004986, 00:15:32.381 "mibps": 14.456350495314448, 00:15:32.381 "io_failed": 0, 00:15:32.381 "io_timeout": 0, 00:15:32.381 "avg_latency_us": 34508.779536050155, 00:15:32.381 "min_latency_us": 7477.061818181818, 00:15:32.381 "max_latency_us": 26095.243636363637 00:15:32.381 } 00:15:32.381 ], 00:15:32.381 "core_count": 1 00:15:32.381 } 00:15:32.381 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:32.381 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 71991 00:15:32.381 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71991 ']' 00:15:32.381 
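
The Latency(us) table and the '{ "results": [...] }' object above are what bdevperf.py prints once perform_tests completes; the JSON repeats the table in machine-readable form (iops, mibps, average/min/max latency in microseconds, io_failed). A sketch of pulling the headline numbers back out, assuming the results object has been captured to a file named results.json and that jq is available; neither step is part of the test itself:

  # The trigger, exactly as traced above:
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests
  # Assuming the printed results object was saved to results.json (the test just
  # lets it go to the log), jq can extract the headline figures:
  jq '.results[0] | {job, iops, mibps, avg_latency_us, io_failed}' results.json
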
09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71991 00:15:32.381 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:32.381 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:32.381 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71991 00:15:32.381 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:15:32.381 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:15:32.381 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71991' 00:15:32.381 killing process with pid 71991 00:15:32.381 Received shutdown signal, test time was about 10.000000 seconds 00:15:32.381 00:15:32.381 Latency(us) 00:15:32.381 [2024-12-06T09:52:57.653Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:32.381 [2024-12-06T09:52:57.653Z] =================================================================================================================== 00:15:32.381 [2024-12-06T09:52:57.653Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:32.381 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71991 00:15:32.381 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71991 00:15:32.640 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 71959 00:15:32.640 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71959 ']' 00:15:32.640 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71959 00:15:32.640 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:32.640 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:32.640 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71959 00:15:32.640 killing process with pid 71959 00:15:32.640 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:32.640 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:32.640 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71959' 00:15:32.640 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71959 00:15:32.640 09:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71959 00:15:32.900 09:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:15:32.900 09:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:32.900 09:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:32.900 09:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:32.900 09:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72125 00:15:32.900 09:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 
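
The two teardown blocks traced above (bdevperf pid 71991, then the target pid 71959) both go through the killprocess helper from common/autotest_common.sh: confirm the pid is set and still alive, check the process name (reactor_N here), then kill and wait so the shell reaps it before the next stage starts. A simplified reconstruction of that logic, with the uname check elided; this is a sketch, not the real function:

  killprocess() {                                  # simplified sketch only
      local pid=$1
      [ -z "$pid" ] && return 1                    # the '[ -z ... ]' check in the trace
      kill -0 "$pid" || return 1                   # is the pid still alive?
      local name
      name=$(ps --no-headers -o comm= "$pid")      # reactor_1 / reactor_2 in the runs above
      [ "$name" = sudo ] && return 1               # the sudo branch is not exercised in this log
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                                  # reap it before the next stage starts
  }
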
00:15:32.900 09:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72125 00:15:32.900 09:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72125 ']' 00:15:32.900 09:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:32.900 09:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:32.900 09:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:32.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:32.900 09:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:32.900 09:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:32.900 [2024-12-06 09:52:58.092228] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:15:32.900 [2024-12-06 09:52:58.092351] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:33.159 [2024-12-06 09:52:58.244897] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:33.159 [2024-12-06 09:52:58.313563] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:33.159 [2024-12-06 09:52:58.313656] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:33.159 [2024-12-06 09:52:58.313683] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:33.159 [2024-12-06 09:52:58.313693] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:33.159 [2024-12-06 09:52:58.313702] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
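
The repeated "Waiting for process to start up and listen on UNIX domain socket ..." lines come from waitforlisten, which polls the new application's RPC socket (up to the max_retries=100 seen in the trace) until it answers. One plausible shape for that loop; the probe RPC (rpc_get_methods) and the sleep interval are illustrative assumptions, since the helper's body is not visible in this log:

  waitforlisten() {                                # illustrative sketch only
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
      echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
      for ((i = 0; i < max_retries; i++)); do
          kill -0 "$pid" || return 1               # the app died while starting up
          /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 -s "$rpc_addr" \
              rpc_get_methods &> /dev/null && return 0
          sleep 0.5
      done
      return 1
  }
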
00:15:33.159 [2024-12-06 09:52:58.314192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:33.159 [2024-12-06 09:52:58.376803] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:34.106 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:34.106 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:34.106 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:34.106 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:34.106 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:34.106 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:34.106 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.kfJwOpD591 00:15:34.106 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.kfJwOpD591 00:15:34.106 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:34.365 [2024-12-06 09:52:59.383922] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:34.365 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:34.625 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:15:34.884 [2024-12-06 09:52:59.896104] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:34.884 [2024-12-06 09:52:59.896388] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:34.884 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:34.884 malloc0 00:15:35.143 09:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:35.143 09:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.kfJwOpD591 00:15:35.412 09:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:15:35.686 09:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:15:35.686 09:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=72186 00:15:35.686 09:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:35.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
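
Unlike the earlier stage, this target is configured step by step over its default RPC socket: a TCP transport, the cnode1 subsystem, a TLS-enabled listener (the -k flag), a malloc bdev exported as namespace 1, and the PSK wired up in two moves, first registering the key file as key0 in the target's keyring and then admitting host1 with --psk key0. The same calls, collected from the trace above into one block (the rpc shell variable is just shorthand for this sketch):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k   # -k: TLS-enabled listener
  $rpc bdev_malloc_create 32 4096 -b malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $rpc keyring_file_add_key key0 /tmp/tmp.kfJwOpD591
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
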
00:15:35.686 09:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 72186 /var/tmp/bdevperf.sock 00:15:35.686 09:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72186 ']' 00:15:35.686 09:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:35.686 09:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:35.686 09:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:35.686 09:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:35.686 09:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:35.946 [2024-12-06 09:53:00.962311] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:15:35.946 [2024-12-06 09:53:00.962594] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72186 ] 00:15:35.946 [2024-12-06 09:53:01.110441] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:35.946 [2024-12-06 09:53:01.198590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:36.206 [2024-12-06 09:53:01.280829] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:36.206 09:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:36.206 09:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:36.206 09:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.kfJwOpD591 00:15:36.465 09:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:15:36.724 [2024-12-06 09:53:01.877870] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:36.724 nvme0n1 00:15:36.724 09:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:36.983 Running I/O for 1 seconds... 
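
The initiator side mirrors that setup inside bdevperf's own process: each call above carries -s /var/tmp/bdevperf.sock, so the key lands in bdevperf's keyring rather than the target's, and bdev_nvme_attach_controller with --psk key0 runs the TLS handshake against 10.0.0.3:4420 and exposes the remote namespace as the nvme0n1 bdev that the verify workload then drives. The same sequence, copied from the trace (the rpc variable is shorthand only):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.kfJwOpD591
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
      -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
  # The attached namespace shows up as bdev "nvme0n1"; the verify run is then
  # started over the same socket:
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
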
00:15:37.921 3754.00 IOPS, 14.66 MiB/s 00:15:37.921 Latency(us) 00:15:37.921 [2024-12-06T09:53:03.193Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:37.921 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:37.921 Verification LBA range: start 0x0 length 0x2000 00:15:37.921 nvme0n1 : 1.02 3801.21 14.85 0.00 0.00 33305.95 7745.16 25737.77 00:15:37.921 [2024-12-06T09:53:03.193Z] =================================================================================================================== 00:15:37.921 [2024-12-06T09:53:03.193Z] Total : 3801.21 14.85 0.00 0.00 33305.95 7745.16 25737.77 00:15:37.921 { 00:15:37.921 "results": [ 00:15:37.921 { 00:15:37.921 "job": "nvme0n1", 00:15:37.921 "core_mask": "0x2", 00:15:37.921 "workload": "verify", 00:15:37.921 "status": "finished", 00:15:37.921 "verify_range": { 00:15:37.921 "start": 0, 00:15:37.921 "length": 8192 00:15:37.921 }, 00:15:37.921 "queue_depth": 128, 00:15:37.921 "io_size": 4096, 00:15:37.921 "runtime": 1.021254, 00:15:37.921 "iops": 3801.2091017513762, 00:15:37.921 "mibps": 14.848473053716313, 00:15:37.921 "io_failed": 0, 00:15:37.921 "io_timeout": 0, 00:15:37.921 "avg_latency_us": 33305.945553838224, 00:15:37.921 "min_latency_us": 7745.163636363636, 00:15:37.921 "max_latency_us": 25737.774545454544 00:15:37.921 } 00:15:37.921 ], 00:15:37.921 "core_count": 1 00:15:37.921 } 00:15:37.921 09:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 72186 00:15:37.921 09:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72186 ']' 00:15:37.921 09:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72186 00:15:37.921 09:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:37.921 09:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:37.921 09:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72186 00:15:37.921 killing process with pid 72186 00:15:37.921 Received shutdown signal, test time was about 1.000000 seconds 00:15:37.921 00:15:37.921 Latency(us) 00:15:37.921 [2024-12-06T09:53:03.193Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:37.921 [2024-12-06T09:53:03.193Z] =================================================================================================================== 00:15:37.921 [2024-12-06T09:53:03.193Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:37.921 09:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:37.921 09:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:37.921 09:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72186' 00:15:37.921 09:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72186 00:15:37.921 09:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72186 00:15:38.180 09:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 72125 00:15:38.180 09:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72125 ']' 00:15:38.180 09:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72125 00:15:38.180 09:53:03 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:38.180 09:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:38.180 09:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72125 00:15:38.439 killing process with pid 72125 00:15:38.439 09:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:38.439 09:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:38.439 09:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72125' 00:15:38.439 09:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72125 00:15:38.439 09:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72125 00:15:38.439 09:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:15:38.439 09:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:38.439 09:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:38.439 09:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:38.439 09:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72231 00:15:38.439 09:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72231 00:15:38.439 09:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:38.439 09:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72231 ']' 00:15:38.439 09:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:38.439 09:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:38.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:38.439 09:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:38.439 09:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:38.439 09:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:38.698 [2024-12-06 09:53:03.718264] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:15:38.699 [2024-12-06 09:53:03.718596] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:38.699 [2024-12-06 09:53:03.865390] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:38.699 [2024-12-06 09:53:03.914940] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:38.699 [2024-12-06 09:53:03.915290] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
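
Every nvmf_tgt in this log is launched through "ip netns exec nvmf_tgt_ns_spdk", so the 10.0.0.3 listener lives in a dedicated network namespace while the pathname-based RPC socket under /var/tmp stays reachable from the host-side script, since a network namespace does not hide the filesystem. A sketch of that split, reusing the launch line above and one setup RPC from earlier as an example:

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF &   # networking lives inside the netns
  nvmfpid=$!
  # After waitforlisten succeeds, RPCs are issued from outside the namespace
  # over the UNIX-domain socket, e.g. the first setup step seen earlier:
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t tcp -o
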
00:15:38.699 [2024-12-06 09:53:03.915326] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:38.699 [2024-12-06 09:53:03.915335] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:38.699 [2024-12-06 09:53:03.915342] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:38.699 [2024-12-06 09:53:03.915854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:38.958 [2024-12-06 09:53:03.969653] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:38.958 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:38.958 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:38.958 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:38.958 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:38.958 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:38.958 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:38.958 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:15:38.958 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.958 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:38.958 [2024-12-06 09:53:04.086088] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:38.958 malloc0 00:15:38.958 [2024-12-06 09:53:04.117881] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:38.958 [2024-12-06 09:53:04.118127] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:38.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:38.958 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.958 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=72250 00:15:38.958 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:15:38.958 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 72250 /var/tmp/bdevperf.sock 00:15:38.958 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72250 ']' 00:15:38.958 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:38.958 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:38.958 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:15:38.958 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:38.958 09:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:38.958 [2024-12-06 09:53:04.222797] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:15:38.958 [2024-12-06 09:53:04.223134] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72250 ] 00:15:39.217 [2024-12-06 09:53:04.370527] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:39.217 [2024-12-06 09:53:04.443728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:39.476 [2024-12-06 09:53:04.528016] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:40.045 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:40.045 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:40.045 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.kfJwOpD591 00:15:40.304 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:15:40.563 [2024-12-06 09:53:05.657112] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:40.563 nvme0n1 00:15:40.563 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:40.821 Running I/O for 1 seconds... 
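
Once this 1-second run completes, the script calls rpc_cmd save_config, and the large '{ "subsystems": ... }' document a little further down is the target's entire live configuration serialized back out. Roughly the same round trip from the command line, with the output file name purely illustrative:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config > /tmp/tgt_config.json   # snapshot the running target
  # A later target can be booted straight into that state, which is how the
  # JSON documents earlier in this log were consumed via -c:
  /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -c /tmp/tgt_config.json
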
00:15:41.758 3647.00 IOPS, 14.25 MiB/s 00:15:41.758 Latency(us) 00:15:41.758 [2024-12-06T09:53:07.030Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:41.758 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:41.758 Verification LBA range: start 0x0 length 0x2000 00:15:41.758 nvme0n1 : 1.02 3715.53 14.51 0.00 0.00 34181.13 4974.78 28478.37 00:15:41.758 [2024-12-06T09:53:07.030Z] =================================================================================================================== 00:15:41.758 [2024-12-06T09:53:07.030Z] Total : 3715.53 14.51 0.00 0.00 34181.13 4974.78 28478.37 00:15:41.758 { 00:15:41.758 "results": [ 00:15:41.758 { 00:15:41.758 "job": "nvme0n1", 00:15:41.758 "core_mask": "0x2", 00:15:41.758 "workload": "verify", 00:15:41.758 "status": "finished", 00:15:41.758 "verify_range": { 00:15:41.758 "start": 0, 00:15:41.758 "length": 8192 00:15:41.758 }, 00:15:41.758 "queue_depth": 128, 00:15:41.758 "io_size": 4096, 00:15:41.758 "runtime": 1.016007, 00:15:41.758 "iops": 3715.5255820087855, 00:15:41.758 "mibps": 14.513771804721818, 00:15:41.758 "io_failed": 0, 00:15:41.758 "io_timeout": 0, 00:15:41.758 "avg_latency_us": 34181.131343527995, 00:15:41.758 "min_latency_us": 4974.778181818182, 00:15:41.758 "max_latency_us": 28478.37090909091 00:15:41.758 } 00:15:41.758 ], 00:15:41.758 "core_count": 1 00:15:41.758 } 00:15:41.758 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:15:41.758 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.758 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:41.758 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.758 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:15:41.758 "subsystems": [ 00:15:41.758 { 00:15:41.758 "subsystem": "keyring", 00:15:41.758 "config": [ 00:15:41.758 { 00:15:41.758 "method": "keyring_file_add_key", 00:15:41.758 "params": { 00:15:41.758 "name": "key0", 00:15:41.758 "path": "/tmp/tmp.kfJwOpD591" 00:15:41.758 } 00:15:41.758 } 00:15:41.758 ] 00:15:41.758 }, 00:15:41.758 { 00:15:41.758 "subsystem": "iobuf", 00:15:41.758 "config": [ 00:15:41.758 { 00:15:41.758 "method": "iobuf_set_options", 00:15:41.758 "params": { 00:15:41.758 "small_pool_count": 8192, 00:15:41.758 "large_pool_count": 1024, 00:15:41.758 "small_bufsize": 8192, 00:15:41.758 "large_bufsize": 135168, 00:15:41.758 "enable_numa": false 00:15:41.758 } 00:15:41.758 } 00:15:41.758 ] 00:15:41.758 }, 00:15:41.758 { 00:15:41.758 "subsystem": "sock", 00:15:41.758 "config": [ 00:15:41.758 { 00:15:41.758 "method": "sock_set_default_impl", 00:15:41.758 "params": { 00:15:41.758 "impl_name": "uring" 00:15:41.758 } 00:15:41.758 }, 00:15:41.758 { 00:15:41.758 "method": "sock_impl_set_options", 00:15:41.758 "params": { 00:15:41.758 "impl_name": "ssl", 00:15:41.758 "recv_buf_size": 4096, 00:15:41.758 "send_buf_size": 4096, 00:15:41.758 "enable_recv_pipe": true, 00:15:41.758 "enable_quickack": false, 00:15:41.758 "enable_placement_id": 0, 00:15:41.758 "enable_zerocopy_send_server": true, 00:15:41.758 "enable_zerocopy_send_client": false, 00:15:41.758 "zerocopy_threshold": 0, 00:15:41.758 "tls_version": 0, 00:15:41.758 "enable_ktls": false 00:15:41.758 } 00:15:41.758 }, 00:15:41.758 { 00:15:41.758 "method": "sock_impl_set_options", 00:15:41.758 "params": { 00:15:41.758 "impl_name": 
"posix", 00:15:41.758 "recv_buf_size": 2097152, 00:15:41.758 "send_buf_size": 2097152, 00:15:41.758 "enable_recv_pipe": true, 00:15:41.758 "enable_quickack": false, 00:15:41.758 "enable_placement_id": 0, 00:15:41.758 "enable_zerocopy_send_server": true, 00:15:41.758 "enable_zerocopy_send_client": false, 00:15:41.758 "zerocopy_threshold": 0, 00:15:41.758 "tls_version": 0, 00:15:41.758 "enable_ktls": false 00:15:41.758 } 00:15:41.758 }, 00:15:41.758 { 00:15:41.758 "method": "sock_impl_set_options", 00:15:41.758 "params": { 00:15:41.758 "impl_name": "uring", 00:15:41.758 "recv_buf_size": 2097152, 00:15:41.758 "send_buf_size": 2097152, 00:15:41.758 "enable_recv_pipe": true, 00:15:41.758 "enable_quickack": false, 00:15:41.758 "enable_placement_id": 0, 00:15:41.758 "enable_zerocopy_send_server": false, 00:15:41.758 "enable_zerocopy_send_client": false, 00:15:41.758 "zerocopy_threshold": 0, 00:15:41.758 "tls_version": 0, 00:15:41.758 "enable_ktls": false 00:15:41.758 } 00:15:41.758 } 00:15:41.758 ] 00:15:41.758 }, 00:15:41.758 { 00:15:41.758 "subsystem": "vmd", 00:15:41.758 "config": [] 00:15:41.758 }, 00:15:41.758 { 00:15:41.758 "subsystem": "accel", 00:15:41.758 "config": [ 00:15:41.758 { 00:15:41.758 "method": "accel_set_options", 00:15:41.758 "params": { 00:15:41.758 "small_cache_size": 128, 00:15:41.758 "large_cache_size": 16, 00:15:41.758 "task_count": 2048, 00:15:41.758 "sequence_count": 2048, 00:15:41.758 "buf_count": 2048 00:15:41.758 } 00:15:41.758 } 00:15:41.758 ] 00:15:41.758 }, 00:15:41.758 { 00:15:41.758 "subsystem": "bdev", 00:15:41.758 "config": [ 00:15:41.758 { 00:15:41.758 "method": "bdev_set_options", 00:15:41.758 "params": { 00:15:41.758 "bdev_io_pool_size": 65535, 00:15:41.758 "bdev_io_cache_size": 256, 00:15:41.758 "bdev_auto_examine": true, 00:15:41.758 "iobuf_small_cache_size": 128, 00:15:41.758 "iobuf_large_cache_size": 16 00:15:41.758 } 00:15:41.758 }, 00:15:41.758 { 00:15:41.758 "method": "bdev_raid_set_options", 00:15:41.758 "params": { 00:15:41.758 "process_window_size_kb": 1024, 00:15:41.758 "process_max_bandwidth_mb_sec": 0 00:15:41.758 } 00:15:41.758 }, 00:15:41.758 { 00:15:41.758 "method": "bdev_iscsi_set_options", 00:15:41.758 "params": { 00:15:41.758 "timeout_sec": 30 00:15:41.758 } 00:15:41.758 }, 00:15:41.758 { 00:15:41.758 "method": "bdev_nvme_set_options", 00:15:41.758 "params": { 00:15:41.758 "action_on_timeout": "none", 00:15:41.758 "timeout_us": 0, 00:15:41.758 "timeout_admin_us": 0, 00:15:41.758 "keep_alive_timeout_ms": 10000, 00:15:41.758 "arbitration_burst": 0, 00:15:41.758 "low_priority_weight": 0, 00:15:41.758 "medium_priority_weight": 0, 00:15:41.758 "high_priority_weight": 0, 00:15:41.758 "nvme_adminq_poll_period_us": 10000, 00:15:41.758 "nvme_ioq_poll_period_us": 0, 00:15:41.758 "io_queue_requests": 0, 00:15:41.758 "delay_cmd_submit": true, 00:15:41.758 "transport_retry_count": 4, 00:15:41.758 "bdev_retry_count": 3, 00:15:41.758 "transport_ack_timeout": 0, 00:15:41.758 "ctrlr_loss_timeout_sec": 0, 00:15:41.758 "reconnect_delay_sec": 0, 00:15:41.758 "fast_io_fail_timeout_sec": 0, 00:15:41.758 "disable_auto_failback": false, 00:15:41.758 "generate_uuids": false, 00:15:41.758 "transport_tos": 0, 00:15:41.758 "nvme_error_stat": false, 00:15:41.758 "rdma_srq_size": 0, 00:15:41.758 "io_path_stat": false, 00:15:41.758 "allow_accel_sequence": false, 00:15:41.758 "rdma_max_cq_size": 0, 00:15:41.758 "rdma_cm_event_timeout_ms": 0, 00:15:41.758 "dhchap_digests": [ 00:15:41.759 "sha256", 00:15:41.759 "sha384", 00:15:41.759 "sha512" 00:15:41.759 ], 00:15:41.759 
"dhchap_dhgroups": [ 00:15:41.759 "null", 00:15:41.759 "ffdhe2048", 00:15:41.759 "ffdhe3072", 00:15:41.759 "ffdhe4096", 00:15:41.759 "ffdhe6144", 00:15:41.759 "ffdhe8192" 00:15:41.759 ] 00:15:41.759 } 00:15:41.759 }, 00:15:41.759 { 00:15:41.759 "method": "bdev_nvme_set_hotplug", 00:15:41.759 "params": { 00:15:41.759 "period_us": 100000, 00:15:41.759 "enable": false 00:15:41.759 } 00:15:41.759 }, 00:15:41.759 { 00:15:41.759 "method": "bdev_malloc_create", 00:15:41.759 "params": { 00:15:41.759 "name": "malloc0", 00:15:41.759 "num_blocks": 8192, 00:15:41.759 "block_size": 4096, 00:15:41.759 "physical_block_size": 4096, 00:15:41.759 "uuid": "59183236-b90f-4aa7-a1ae-647a48cfcbf6", 00:15:41.759 "optimal_io_boundary": 0, 00:15:41.759 "md_size": 0, 00:15:41.759 "dif_type": 0, 00:15:41.759 "dif_is_head_of_md": false, 00:15:41.759 "dif_pi_format": 0 00:15:41.759 } 00:15:41.759 }, 00:15:41.759 { 00:15:41.759 "method": "bdev_wait_for_examine" 00:15:41.759 } 00:15:41.759 ] 00:15:41.759 }, 00:15:41.759 { 00:15:41.759 "subsystem": "nbd", 00:15:41.759 "config": [] 00:15:41.759 }, 00:15:41.759 { 00:15:41.759 "subsystem": "scheduler", 00:15:41.759 "config": [ 00:15:41.759 { 00:15:41.759 "method": "framework_set_scheduler", 00:15:41.759 "params": { 00:15:41.759 "name": "static" 00:15:41.759 } 00:15:41.759 } 00:15:41.759 ] 00:15:41.759 }, 00:15:41.759 { 00:15:41.759 "subsystem": "nvmf", 00:15:41.759 "config": [ 00:15:41.759 { 00:15:41.759 "method": "nvmf_set_config", 00:15:41.759 "params": { 00:15:41.759 "discovery_filter": "match_any", 00:15:41.759 "admin_cmd_passthru": { 00:15:41.759 "identify_ctrlr": false 00:15:41.759 }, 00:15:41.759 "dhchap_digests": [ 00:15:41.759 "sha256", 00:15:41.759 "sha384", 00:15:41.759 "sha512" 00:15:41.759 ], 00:15:41.759 "dhchap_dhgroups": [ 00:15:41.759 "null", 00:15:41.759 "ffdhe2048", 00:15:41.759 "ffdhe3072", 00:15:41.759 "ffdhe4096", 00:15:41.759 "ffdhe6144", 00:15:41.759 "ffdhe8192" 00:15:41.759 ] 00:15:41.759 } 00:15:41.759 }, 00:15:41.759 { 00:15:41.759 "method": "nvmf_set_max_subsystems", 00:15:41.759 "params": { 00:15:41.759 "max_subsystems": 1024 00:15:41.759 } 00:15:41.759 }, 00:15:41.759 { 00:15:41.759 "method": "nvmf_set_crdt", 00:15:41.759 "params": { 00:15:41.759 "crdt1": 0, 00:15:41.759 "crdt2": 0, 00:15:41.759 "crdt3": 0 00:15:41.759 } 00:15:41.759 }, 00:15:41.759 { 00:15:41.759 "method": "nvmf_create_transport", 00:15:41.759 "params": { 00:15:41.759 "trtype": "TCP", 00:15:41.759 "max_queue_depth": 128, 00:15:41.759 "max_io_qpairs_per_ctrlr": 127, 00:15:41.759 "in_capsule_data_size": 4096, 00:15:41.759 "max_io_size": 131072, 00:15:41.759 "io_unit_size": 131072, 00:15:41.759 "max_aq_depth": 128, 00:15:41.759 "num_shared_buffers": 511, 00:15:41.759 "buf_cache_size": 4294967295, 00:15:41.759 "dif_insert_or_strip": false, 00:15:41.759 "zcopy": false, 00:15:41.759 "c2h_success": false, 00:15:41.759 "sock_priority": 0, 00:15:41.759 "abort_timeout_sec": 1, 00:15:41.759 "ack_timeout": 0, 00:15:41.759 "data_wr_pool_size": 0 00:15:41.759 } 00:15:41.759 }, 00:15:41.759 { 00:15:41.759 "method": "nvmf_create_subsystem", 00:15:41.759 "params": { 00:15:41.759 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:41.759 "allow_any_host": false, 00:15:41.759 "serial_number": "00000000000000000000", 00:15:41.759 "model_number": "SPDK bdev Controller", 00:15:41.759 "max_namespaces": 32, 00:15:41.759 "min_cntlid": 1, 00:15:41.759 "max_cntlid": 65519, 00:15:41.759 "ana_reporting": false 00:15:41.759 } 00:15:41.759 }, 00:15:41.759 { 00:15:41.759 "method": "nvmf_subsystem_add_host", 
00:15:41.759 "params": { 00:15:41.759 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:41.759 "host": "nqn.2016-06.io.spdk:host1", 00:15:41.759 "psk": "key0" 00:15:41.759 } 00:15:41.759 }, 00:15:41.759 { 00:15:41.759 "method": "nvmf_subsystem_add_ns", 00:15:41.759 "params": { 00:15:41.759 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:41.759 "namespace": { 00:15:41.759 "nsid": 1, 00:15:41.759 "bdev_name": "malloc0", 00:15:41.759 "nguid": "59183236B90F4AA7A1AE647A48CFCBF6", 00:15:41.759 "uuid": "59183236-b90f-4aa7-a1ae-647a48cfcbf6", 00:15:41.759 "no_auto_visible": false 00:15:41.759 } 00:15:41.759 } 00:15:41.759 }, 00:15:41.759 { 00:15:41.759 "method": "nvmf_subsystem_add_listener", 00:15:41.759 "params": { 00:15:41.759 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:41.759 "listen_address": { 00:15:41.759 "trtype": "TCP", 00:15:41.759 "adrfam": "IPv4", 00:15:41.759 "traddr": "10.0.0.3", 00:15:41.759 "trsvcid": "4420" 00:15:41.759 }, 00:15:41.759 "secure_channel": false, 00:15:41.759 "sock_impl": "ssl" 00:15:41.759 } 00:15:41.759 } 00:15:41.759 ] 00:15:41.759 } 00:15:41.759 ] 00:15:41.759 }' 00:15:41.759 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:15:42.326 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:15:42.326 "subsystems": [ 00:15:42.326 { 00:15:42.326 "subsystem": "keyring", 00:15:42.326 "config": [ 00:15:42.326 { 00:15:42.326 "method": "keyring_file_add_key", 00:15:42.326 "params": { 00:15:42.326 "name": "key0", 00:15:42.326 "path": "/tmp/tmp.kfJwOpD591" 00:15:42.326 } 00:15:42.326 } 00:15:42.326 ] 00:15:42.326 }, 00:15:42.326 { 00:15:42.326 "subsystem": "iobuf", 00:15:42.326 "config": [ 00:15:42.326 { 00:15:42.326 "method": "iobuf_set_options", 00:15:42.326 "params": { 00:15:42.326 "small_pool_count": 8192, 00:15:42.326 "large_pool_count": 1024, 00:15:42.326 "small_bufsize": 8192, 00:15:42.326 "large_bufsize": 135168, 00:15:42.326 "enable_numa": false 00:15:42.326 } 00:15:42.326 } 00:15:42.326 ] 00:15:42.326 }, 00:15:42.326 { 00:15:42.326 "subsystem": "sock", 00:15:42.326 "config": [ 00:15:42.327 { 00:15:42.327 "method": "sock_set_default_impl", 00:15:42.327 "params": { 00:15:42.327 "impl_name": "uring" 00:15:42.327 } 00:15:42.327 }, 00:15:42.327 { 00:15:42.327 "method": "sock_impl_set_options", 00:15:42.327 "params": { 00:15:42.327 "impl_name": "ssl", 00:15:42.327 "recv_buf_size": 4096, 00:15:42.327 "send_buf_size": 4096, 00:15:42.327 "enable_recv_pipe": true, 00:15:42.327 "enable_quickack": false, 00:15:42.327 "enable_placement_id": 0, 00:15:42.327 "enable_zerocopy_send_server": true, 00:15:42.327 "enable_zerocopy_send_client": false, 00:15:42.327 "zerocopy_threshold": 0, 00:15:42.327 "tls_version": 0, 00:15:42.327 "enable_ktls": false 00:15:42.327 } 00:15:42.327 }, 00:15:42.327 { 00:15:42.327 "method": "sock_impl_set_options", 00:15:42.327 "params": { 00:15:42.327 "impl_name": "posix", 00:15:42.327 "recv_buf_size": 2097152, 00:15:42.327 "send_buf_size": 2097152, 00:15:42.327 "enable_recv_pipe": true, 00:15:42.327 "enable_quickack": false, 00:15:42.327 "enable_placement_id": 0, 00:15:42.327 "enable_zerocopy_send_server": true, 00:15:42.327 "enable_zerocopy_send_client": false, 00:15:42.327 "zerocopy_threshold": 0, 00:15:42.327 "tls_version": 0, 00:15:42.327 "enable_ktls": false 00:15:42.327 } 00:15:42.327 }, 00:15:42.327 { 00:15:42.327 "method": "sock_impl_set_options", 00:15:42.327 "params": { 00:15:42.327 "impl_name": "uring", 00:15:42.327 
"recv_buf_size": 2097152, 00:15:42.327 "send_buf_size": 2097152, 00:15:42.327 "enable_recv_pipe": true, 00:15:42.327 "enable_quickack": false, 00:15:42.327 "enable_placement_id": 0, 00:15:42.327 "enable_zerocopy_send_server": false, 00:15:42.327 "enable_zerocopy_send_client": false, 00:15:42.327 "zerocopy_threshold": 0, 00:15:42.327 "tls_version": 0, 00:15:42.327 "enable_ktls": false 00:15:42.327 } 00:15:42.327 } 00:15:42.327 ] 00:15:42.327 }, 00:15:42.327 { 00:15:42.327 "subsystem": "vmd", 00:15:42.327 "config": [] 00:15:42.327 }, 00:15:42.327 { 00:15:42.327 "subsystem": "accel", 00:15:42.327 "config": [ 00:15:42.327 { 00:15:42.327 "method": "accel_set_options", 00:15:42.327 "params": { 00:15:42.327 "small_cache_size": 128, 00:15:42.327 "large_cache_size": 16, 00:15:42.327 "task_count": 2048, 00:15:42.327 "sequence_count": 2048, 00:15:42.327 "buf_count": 2048 00:15:42.327 } 00:15:42.327 } 00:15:42.327 ] 00:15:42.327 }, 00:15:42.327 { 00:15:42.327 "subsystem": "bdev", 00:15:42.327 "config": [ 00:15:42.327 { 00:15:42.327 "method": "bdev_set_options", 00:15:42.327 "params": { 00:15:42.327 "bdev_io_pool_size": 65535, 00:15:42.327 "bdev_io_cache_size": 256, 00:15:42.327 "bdev_auto_examine": true, 00:15:42.327 "iobuf_small_cache_size": 128, 00:15:42.327 "iobuf_large_cache_size": 16 00:15:42.327 } 00:15:42.327 }, 00:15:42.327 { 00:15:42.327 "method": "bdev_raid_set_options", 00:15:42.327 "params": { 00:15:42.327 "process_window_size_kb": 1024, 00:15:42.327 "process_max_bandwidth_mb_sec": 0 00:15:42.327 } 00:15:42.327 }, 00:15:42.327 { 00:15:42.327 "method": "bdev_iscsi_set_options", 00:15:42.327 "params": { 00:15:42.327 "timeout_sec": 30 00:15:42.327 } 00:15:42.327 }, 00:15:42.327 { 00:15:42.327 "method": "bdev_nvme_set_options", 00:15:42.327 "params": { 00:15:42.327 "action_on_timeout": "none", 00:15:42.327 "timeout_us": 0, 00:15:42.327 "timeout_admin_us": 0, 00:15:42.327 "keep_alive_timeout_ms": 10000, 00:15:42.327 "arbitration_burst": 0, 00:15:42.327 "low_priority_weight": 0, 00:15:42.327 "medium_priority_weight": 0, 00:15:42.327 "high_priority_weight": 0, 00:15:42.327 "nvme_adminq_poll_period_us": 10000, 00:15:42.327 "nvme_ioq_poll_period_us": 0, 00:15:42.327 "io_queue_requests": 512, 00:15:42.327 "delay_cmd_submit": true, 00:15:42.327 "transport_retry_count": 4, 00:15:42.327 "bdev_retry_count": 3, 00:15:42.327 "transport_ack_timeout": 0, 00:15:42.327 "ctrlr_loss_timeout_sec": 0, 00:15:42.327 "reconnect_delay_sec": 0, 00:15:42.327 "fast_io_fail_timeout_sec": 0, 00:15:42.327 "disable_auto_failback": false, 00:15:42.327 "generate_uuids": false, 00:15:42.327 "transport_tos": 0, 00:15:42.327 "nvme_error_stat": false, 00:15:42.327 "rdma_srq_size": 0, 00:15:42.327 "io_path_stat": false, 00:15:42.327 "allow_accel_sequence": false, 00:15:42.327 "rdma_max_cq_size": 0, 00:15:42.327 "rdma_cm_event_timeout_ms": 0, 00:15:42.327 "dhchap_digests": [ 00:15:42.327 "sha256", 00:15:42.327 "sha384", 00:15:42.327 "sha512" 00:15:42.327 ], 00:15:42.327 "dhchap_dhgroups": [ 00:15:42.327 "null", 00:15:42.327 "ffdhe2048", 00:15:42.327 "ffdhe3072", 00:15:42.327 "ffdhe4096", 00:15:42.327 "ffdhe6144", 00:15:42.327 "ffdhe8192" 00:15:42.327 ] 00:15:42.327 } 00:15:42.327 }, 00:15:42.327 { 00:15:42.327 "method": "bdev_nvme_attach_controller", 00:15:42.327 "params": { 00:15:42.327 "name": "nvme0", 00:15:42.327 "trtype": "TCP", 00:15:42.327 "adrfam": "IPv4", 00:15:42.327 "traddr": "10.0.0.3", 00:15:42.327 "trsvcid": "4420", 00:15:42.327 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:42.327 "prchk_reftag": false, 00:15:42.327 
"prchk_guard": false, 00:15:42.327 "ctrlr_loss_timeout_sec": 0, 00:15:42.327 "reconnect_delay_sec": 0, 00:15:42.327 "fast_io_fail_timeout_sec": 0, 00:15:42.327 "psk": "key0", 00:15:42.327 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:42.327 "hdgst": false, 00:15:42.327 "ddgst": false, 00:15:42.327 "multipath": "multipath" 00:15:42.327 } 00:15:42.327 }, 00:15:42.327 { 00:15:42.327 "method": "bdev_nvme_set_hotplug", 00:15:42.327 "params": { 00:15:42.327 "period_us": 100000, 00:15:42.327 "enable": false 00:15:42.327 } 00:15:42.327 }, 00:15:42.327 { 00:15:42.327 "method": "bdev_enable_histogram", 00:15:42.327 "params": { 00:15:42.327 "name": "nvme0n1", 00:15:42.327 "enable": true 00:15:42.327 } 00:15:42.327 }, 00:15:42.327 { 00:15:42.327 "method": "bdev_wait_for_examine" 00:15:42.327 } 00:15:42.327 ] 00:15:42.327 }, 00:15:42.327 { 00:15:42.327 "subsystem": "nbd", 00:15:42.327 "config": [] 00:15:42.327 } 00:15:42.327 ] 00:15:42.327 }' 00:15:42.327 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 72250 00:15:42.327 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72250 ']' 00:15:42.327 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72250 00:15:42.327 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:42.327 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:42.327 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72250 00:15:42.327 killing process with pid 72250 00:15:42.327 Received shutdown signal, test time was about 1.000000 seconds 00:15:42.327 00:15:42.327 Latency(us) 00:15:42.327 [2024-12-06T09:53:07.599Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:42.327 [2024-12-06T09:53:07.599Z] =================================================================================================================== 00:15:42.327 [2024-12-06T09:53:07.599Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:42.327 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:42.327 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:42.327 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72250' 00:15:42.327 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72250 00:15:42.327 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72250 00:15:42.586 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 72231 00:15:42.586 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72231 ']' 00:15:42.586 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72231 00:15:42.586 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:42.586 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:42.586 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72231 00:15:42.586 killing process with pid 72231 00:15:42.586 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:15:42.586 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:42.586 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72231' 00:15:42.586 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72231 00:15:42.586 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72231 00:15:42.845 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:15:42.845 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:42.845 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:42.845 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:15:42.845 "subsystems": [ 00:15:42.845 { 00:15:42.845 "subsystem": "keyring", 00:15:42.845 "config": [ 00:15:42.845 { 00:15:42.845 "method": "keyring_file_add_key", 00:15:42.845 "params": { 00:15:42.845 "name": "key0", 00:15:42.845 "path": "/tmp/tmp.kfJwOpD591" 00:15:42.845 } 00:15:42.845 } 00:15:42.845 ] 00:15:42.845 }, 00:15:42.845 { 00:15:42.845 "subsystem": "iobuf", 00:15:42.845 "config": [ 00:15:42.845 { 00:15:42.845 "method": "iobuf_set_options", 00:15:42.845 "params": { 00:15:42.845 "small_pool_count": 8192, 00:15:42.845 "large_pool_count": 1024, 00:15:42.845 "small_bufsize": 8192, 00:15:42.845 "large_bufsize": 135168, 00:15:42.845 "enable_numa": false 00:15:42.845 } 00:15:42.845 } 00:15:42.845 ] 00:15:42.845 }, 00:15:42.845 { 00:15:42.845 "subsystem": "sock", 00:15:42.845 "config": [ 00:15:42.845 { 00:15:42.845 "method": "sock_set_default_impl", 00:15:42.845 "params": { 00:15:42.845 "impl_name": "uring" 00:15:42.845 } 00:15:42.845 }, 00:15:42.845 { 00:15:42.845 "method": "sock_impl_set_options", 00:15:42.845 "params": { 00:15:42.845 "impl_name": "ssl", 00:15:42.845 "recv_buf_size": 4096, 00:15:42.845 "send_buf_size": 4096, 00:15:42.845 "enable_recv_pipe": true, 00:15:42.845 "enable_quickack": false, 00:15:42.845 "enable_placement_id": 0, 00:15:42.845 "enable_zerocopy_send_server": true, 00:15:42.845 "enable_zerocopy_send_client": false, 00:15:42.845 "zerocopy_threshold": 0, 00:15:42.845 "tls_version": 0, 00:15:42.845 "enable_ktls": false 00:15:42.845 } 00:15:42.845 }, 00:15:42.845 { 00:15:42.845 "method": "sock_impl_set_options", 00:15:42.845 "params": { 00:15:42.845 "impl_name": "posix", 00:15:42.845 "recv_buf_size": 2097152, 00:15:42.845 "send_buf_size": 2097152, 00:15:42.845 "enable_recv_pipe": true, 00:15:42.845 "enable_quickack": false, 00:15:42.845 "enable_placement_id": 0, 00:15:42.845 "enable_zerocopy_send_server": true, 00:15:42.845 "enable_zerocopy_send_client": false, 00:15:42.845 "zerocopy_threshold": 0, 00:15:42.845 "tls_version": 0, 00:15:42.845 "enable_ktls": false 00:15:42.845 } 00:15:42.845 }, 00:15:42.845 { 00:15:42.845 "method": "sock_impl_set_options", 00:15:42.845 "params": { 00:15:42.845 "impl_name": "uring", 00:15:42.845 "recv_buf_size": 2097152, 00:15:42.845 "send_buf_size": 2097152, 00:15:42.845 "enable_recv_pipe": true, 00:15:42.845 "enable_quickack": false, 00:15:42.845 "enable_placement_id": 0, 00:15:42.845 "enable_zerocopy_send_server": false, 00:15:42.846 "enable_zerocopy_send_client": false, 00:15:42.846 "zerocopy_threshold": 0, 00:15:42.846 "tls_version": 0, 00:15:42.846 "enable_ktls": false 00:15:42.846 } 00:15:42.846 } 00:15:42.846 ] 00:15:42.846 }, 00:15:42.846 { 
00:15:42.846 "subsystem": "vmd", 00:15:42.846 "config": [] 00:15:42.846 }, 00:15:42.846 { 00:15:42.846 "subsystem": "accel", 00:15:42.846 "config": [ 00:15:42.846 { 00:15:42.846 "method": "accel_set_options", 00:15:42.846 "params": { 00:15:42.846 "small_cache_size": 128, 00:15:42.846 "large_cache_size": 16, 00:15:42.846 "task_count": 2048, 00:15:42.846 "sequence_count": 2048, 00:15:42.846 "buf_count": 2048 00:15:42.846 } 00:15:42.846 } 00:15:42.846 ] 00:15:42.846 }, 00:15:42.846 { 00:15:42.846 "subsystem": "bdev", 00:15:42.846 "config": [ 00:15:42.846 { 00:15:42.846 "method": "bdev_set_options", 00:15:42.846 "params": { 00:15:42.846 "bdev_io_pool_size": 65535, 00:15:42.846 "bdev_io_cache_size": 256, 00:15:42.846 "bdev_auto_examine": true, 00:15:42.846 "iobuf_small_cache_size": 128, 00:15:42.846 "iobuf_large_cache_size": 16 00:15:42.846 } 00:15:42.846 }, 00:15:42.846 { 00:15:42.846 "method": "bdev_raid_set_options", 00:15:42.846 "params": { 00:15:42.846 "process_window_size_kb": 1024, 00:15:42.846 "process_max_bandwidth_mb_sec": 0 00:15:42.846 } 00:15:42.846 }, 00:15:42.846 { 00:15:42.846 "method": "bdev_iscsi_set_options", 00:15:42.846 "params": { 00:15:42.846 "timeout_sec": 30 00:15:42.846 } 00:15:42.846 }, 00:15:42.846 { 00:15:42.846 "method": "bdev_nvme_set_options", 00:15:42.846 "params": { 00:15:42.846 "action_on_timeout": "none", 00:15:42.846 "timeout_us": 0, 00:15:42.846 "timeout_admin_us": 0, 00:15:42.846 "keep_alive_timeout_ms": 10000, 00:15:42.846 "arbitration_burst": 0, 00:15:42.846 "low_priority_weight": 0, 00:15:42.846 "medium_priority_weight": 0, 00:15:42.846 "high_priority_weight": 0, 00:15:42.846 "nvme_adminq_poll_period_us": 10000, 00:15:42.846 "nvme_ioq_poll_period_us": 0, 00:15:42.846 "io_queue_requests": 0, 00:15:42.846 "delay_cmd_submit": true, 00:15:42.846 "transport_retry_count": 4, 00:15:42.846 "bdev_retry_count": 3, 00:15:42.846 "transport_ack_timeout": 0, 00:15:42.846 "ctrlr_loss_timeout_sec": 0, 00:15:42.846 "reconnect_delay_sec": 0, 00:15:42.846 "fast_io_fail_timeout_sec": 0, 00:15:42.846 "disable_auto_failback": false, 00:15:42.846 "generate_uuids": false, 00:15:42.846 "transport_tos": 0, 00:15:42.846 "nvme_error_stat": false, 00:15:42.846 "rdma_srq_size": 0, 00:15:42.846 "io_path_stat": false, 00:15:42.846 "allow_accel_sequence": false, 00:15:42.846 "rdma_max_cq_size": 0, 00:15:42.846 "rdma_cm_event_timeout_ms": 0, 00:15:42.846 "dhchap_digests": [ 00:15:42.846 "sha256", 00:15:42.846 "sha384", 00:15:42.846 "sha512" 00:15:42.846 ], 00:15:42.846 "dhchap_dhgroups": [ 00:15:42.846 "null", 00:15:42.846 "ffdhe2048", 00:15:42.846 "ffdhe3072", 00:15:42.846 "ffdhe4096", 00:15:42.846 "ffdhe6144", 00:15:42.846 "ffdhe8192" 00:15:42.846 ] 00:15:42.846 } 00:15:42.846 }, 00:15:42.846 { 00:15:42.846 "method": "bdev_nvme_set_hotplug", 00:15:42.846 "params": { 00:15:42.846 "period_us": 100000, 00:15:42.846 "enable": false 00:15:42.846 } 00:15:42.846 }, 00:15:42.846 { 00:15:42.846 "method": "bdev_malloc_create", 00:15:42.846 "params": { 00:15:42.846 "name": "malloc0", 00:15:42.846 "num_blocks": 8192, 00:15:42.846 "block_size": 4096, 00:15:42.846 "physical_block_size": 4096, 00:15:42.846 "uuid": "59183236-b90f-4aa7-a1ae-647a48cfcbf6", 00:15:42.846 "optimal_io_boundary": 0, 00:15:42.846 "md_size": 0, 00:15:42.846 "dif_type": 0, 00:15:42.846 "dif_is_head_of_md": false, 00:15:42.846 "dif_pi_format": 0 00:15:42.846 } 00:15:42.846 }, 00:15:42.846 { 00:15:42.846 "method": "bdev_wait_for_examine" 00:15:42.846 } 00:15:42.846 ] 00:15:42.846 }, 00:15:42.846 { 00:15:42.846 "subsystem": 
"nbd", 00:15:42.846 "config": [] 00:15:42.846 }, 00:15:42.846 { 00:15:42.846 "subsystem": "scheduler", 00:15:42.846 "config": [ 00:15:42.846 { 00:15:42.846 "method": "framework_set_scheduler", 00:15:42.846 "params": { 00:15:42.846 "name": "static" 00:15:42.846 } 00:15:42.846 } 00:15:42.846 ] 00:15:42.846 }, 00:15:42.846 { 00:15:42.846 "subsystem": "nvmf", 00:15:42.846 "config": [ 00:15:42.846 { 00:15:42.846 "method": "nvmf_set_config", 00:15:42.846 "params": { 00:15:42.846 "discovery_filter": "match_any", 00:15:42.846 "admin_cmd_passthru": { 00:15:42.846 "identify_ctrlr": false 00:15:42.846 }, 00:15:42.846 "dhchap_digests": [ 00:15:42.846 "sha256", 00:15:42.846 "sha384", 00:15:42.846 "sha512" 00:15:42.846 ], 00:15:42.846 "dhchap_dhgroups": [ 00:15:42.846 "null", 00:15:42.846 "ffdhe2048", 00:15:42.846 "ffdhe3072", 00:15:42.846 "ffdhe4096", 00:15:42.846 "ffdhe6144", 00:15:42.846 "ffdhe8192" 00:15:42.846 ] 00:15:42.846 } 00:15:42.846 }, 00:15:42.846 { 00:15:42.846 "method": "nvmf_set_max_subsystems", 00:15:42.846 "params": { 00:15:42.846 "max_subsystems": 1024 00:15:42.846 } 00:15:42.846 }, 00:15:42.846 { 00:15:42.846 "method": "nvmf_set_crdt", 00:15:42.846 "params": { 00:15:42.846 "crdt1": 0, 00:15:42.846 "crdt2": 0, 00:15:42.846 "crdt3": 0 00:15:42.846 } 00:15:42.846 }, 00:15:42.846 { 00:15:42.846 "method": "nvmf_create_transport", 00:15:42.846 "params": { 00:15:42.846 "trtype": "TCP", 00:15:42.846 "max_queue_depth": 128, 00:15:42.846 "max_io_qpairs_per_ctrlr": 127, 00:15:42.846 "in_capsule_data_size": 4096, 00:15:42.846 "max_io_size": 131072, 00:15:42.846 "io_unit_size": 131072, 00:15:42.846 "max_aq_depth": 128, 00:15:42.846 "num_shared_buffers": 511, 00:15:42.846 "buf_cache_size": 4294967295, 00:15:42.846 "dif_insert_or_strip": false, 00:15:42.846 "zcopy": false, 00:15:42.846 "c2h_success": false, 00:15:42.846 "sock_priority": 0, 00:15:42.846 "abort_timeout_sec": 1, 00:15:42.846 "ack_timeout": 0, 00:15:42.846 "data_wr_pool_size": 0 00:15:42.846 } 00:15:42.846 }, 00:15:42.846 { 00:15:42.846 "method": "nvmf_create_subsystem", 00:15:42.846 "params": { 00:15:42.846 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:42.846 "allow_any_host": false, 00:15:42.846 "serial_number": "00000000000000000000", 00:15:42.846 "model_number": "SPDK bdev Controller", 00:15:42.846 "max_namespaces": 32, 00:15:42.846 "min_cntlid": 1, 00:15:42.846 "max_cntlid": 65519, 00:15:42.846 "ana_reporting": false 00:15:42.846 } 00:15:42.846 }, 00:15:42.846 { 00:15:42.846 "method": "nvmf_subsystem_add_host", 00:15:42.847 "params": { 00:15:42.847 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:42.847 "host": "nqn.2016-06.io.spdk:host1", 00:15:42.847 "psk": "key0" 00:15:42.847 } 00:15:42.847 }, 00:15:42.847 { 00:15:42.847 "method": "nvmf_subsystem_add_ns", 00:15:42.847 "params": { 00:15:42.847 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:42.847 "namespace": { 00:15:42.847 "nsid": 1, 00:15:42.847 "bdev_name": "malloc0", 00:15:42.847 "nguid": "59183236B90F4AA7A1AE647A48CFCBF6", 00:15:42.847 "uuid": "59183236-b90f-4aa7-a1ae-647a48cfcbf6", 00:15:42.847 "no_auto_visible": false 00:15:42.847 } 00:15:42.847 } 00:15:42.847 }, 00:15:42.847 { 00:15:42.847 "method": "nvmf_subsystem_add_listener", 00:15:42.847 "params": { 00:15:42.847 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:42.847 "listen_address": { 00:15:42.847 "trtype": "TCP", 00:15:42.847 "adrfam": "IPv4", 00:15:42.847 "traddr": "10.0.0.3", 00:15:42.847 "trsvcid": "4420" 00:15:42.847 }, 00:15:42.847 "secure_channel": false, 00:15:42.847 "sock_impl": "ssl" 00:15:42.847 } 00:15:42.847 } 
00:15:42.847 ] 00:15:42.847 } 00:15:42.847 ] 00:15:42.847 }' 00:15:42.847 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:42.847 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72316 00:15:42.847 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72316 00:15:42.847 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:15:42.847 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72316 ']' 00:15:42.847 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:42.847 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:42.847 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:42.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:42.847 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:42.847 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:42.847 [2024-12-06 09:53:08.024455] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:15:42.847 [2024-12-06 09:53:08.025031] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:43.106 [2024-12-06 09:53:08.170940] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:43.106 [2024-12-06 09:53:08.220786] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:43.106 [2024-12-06 09:53:08.220842] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:43.106 [2024-12-06 09:53:08.220869] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:43.106 [2024-12-06 09:53:08.220876] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:43.106 [2024-12-06 09:53:08.220883] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
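A hedged sketch of how the target startup above maps to a manual invocation, assuming the JSON echoed into /dev/fd/62 is held in $tgtcfg and the nvmf_tgt_ns_spdk namespace already exists (the polling loop below is illustrative; the harness uses its own waitforlisten helper to watch /var/tmp/spdk.sock):

  # start nvmf_tgt inside the test namespace, feeding the config via process substitution
  ip netns exec nvmf_tgt_ns_spdk \
      ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -c <(echo "$tgtcfg") &
  nvmfpid=$!
  # wait until the RPC socket answers before issuing further RPCs
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.2
  done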
00:15:43.106 [2024-12-06 09:53:08.221296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:43.365 [2024-12-06 09:53:08.395532] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:43.365 [2024-12-06 09:53:08.480229] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:43.365 [2024-12-06 09:53:08.512228] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:43.365 [2024-12-06 09:53:08.512782] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:43.931 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:43.931 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:43.931 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:43.931 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:43.931 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:43.931 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:43.931 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=72348 00:15:43.931 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 72348 /var/tmp/bdevperf.sock 00:15:43.931 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72348 ']' 00:15:43.931 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:15:43.931 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:43.931 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:15:43.931 "subsystems": [ 00:15:43.931 { 00:15:43.931 "subsystem": "keyring", 00:15:43.931 "config": [ 00:15:43.931 { 00:15:43.931 "method": "keyring_file_add_key", 00:15:43.931 "params": { 00:15:43.931 "name": "key0", 00:15:43.931 "path": "/tmp/tmp.kfJwOpD591" 00:15:43.931 } 00:15:43.931 } 00:15:43.931 ] 00:15:43.931 }, 00:15:43.931 { 00:15:43.931 "subsystem": "iobuf", 00:15:43.931 "config": [ 00:15:43.931 { 00:15:43.931 "method": "iobuf_set_options", 00:15:43.931 "params": { 00:15:43.931 "small_pool_count": 8192, 00:15:43.931 "large_pool_count": 1024, 00:15:43.931 "small_bufsize": 8192, 00:15:43.931 "large_bufsize": 135168, 00:15:43.931 "enable_numa": false 00:15:43.931 } 00:15:43.931 } 00:15:43.931 ] 00:15:43.931 }, 00:15:43.931 { 00:15:43.931 "subsystem": "sock", 00:15:43.931 "config": [ 00:15:43.931 { 00:15:43.931 "method": "sock_set_default_impl", 00:15:43.931 "params": { 00:15:43.931 "impl_name": "uring" 00:15:43.931 } 00:15:43.931 }, 00:15:43.931 { 00:15:43.931 "method": "sock_impl_set_options", 00:15:43.931 "params": { 00:15:43.931 "impl_name": "ssl", 00:15:43.931 "recv_buf_size": 4096, 00:15:43.931 "send_buf_size": 4096, 00:15:43.931 "enable_recv_pipe": true, 00:15:43.931 "enable_quickack": false, 00:15:43.931 "enable_placement_id": 0, 00:15:43.931 "enable_zerocopy_send_server": true, 00:15:43.931 "enable_zerocopy_send_client": false, 00:15:43.931 "zerocopy_threshold": 0, 00:15:43.931 "tls_version": 0, 00:15:43.931 "enable_ktls": 
false 00:15:43.931 } 00:15:43.931 }, 00:15:43.931 { 00:15:43.931 "method": "sock_impl_set_options", 00:15:43.931 "params": { 00:15:43.931 "impl_name": "posix", 00:15:43.931 "recv_buf_size": 2097152, 00:15:43.931 "send_buf_size": 2097152, 00:15:43.931 "enable_recv_pipe": true, 00:15:43.931 "enable_quickack": false, 00:15:43.931 "enable_placement_id": 0, 00:15:43.931 "enable_zerocopy_send_server": true, 00:15:43.931 "enable_zerocopy_send_client": false, 00:15:43.931 "zerocopy_threshold": 0, 00:15:43.931 "tls_version": 0, 00:15:43.931 "enable_ktls": false 00:15:43.931 } 00:15:43.931 }, 00:15:43.931 { 00:15:43.931 "method": "sock_impl_set_options", 00:15:43.931 "params": { 00:15:43.931 "impl_name": "uring", 00:15:43.931 "recv_buf_size": 2097152, 00:15:43.931 "send_buf_size": 2097152, 00:15:43.931 "enable_recv_pipe": true, 00:15:43.931 "enable_quickack": false, 00:15:43.931 "enable_placement_id": 0, 00:15:43.931 "enable_zerocopy_send_server": false, 00:15:43.931 "enable_zerocopy_send_client": false, 00:15:43.931 "zerocopy_threshold": 0, 00:15:43.931 "tls_version": 0, 00:15:43.931 "enable_ktls": false 00:15:43.931 } 00:15:43.931 } 00:15:43.931 ] 00:15:43.931 }, 00:15:43.931 { 00:15:43.931 "subsystem": "vmd", 00:15:43.931 "config": [] 00:15:43.931 }, 00:15:43.931 { 00:15:43.931 "subsystem": "accel", 00:15:43.931 "config": [ 00:15:43.931 { 00:15:43.931 "method": "accel_set_options", 00:15:43.931 "params": { 00:15:43.931 "small_cache_size": 128, 00:15:43.931 "large_cache_size": 16, 00:15:43.931 "task_count": 2048, 00:15:43.931 "sequence_count": 2048, 00:15:43.931 "buf_count": 2048 00:15:43.931 } 00:15:43.931 } 00:15:43.931 ] 00:15:43.931 }, 00:15:43.931 { 00:15:43.931 "subsystem": "bdev", 00:15:43.931 "config": [ 00:15:43.931 { 00:15:43.931 "method": "bdev_set_options", 00:15:43.931 "params": { 00:15:43.931 "bdev_io_pool_size": 65535, 00:15:43.931 "bdev_io_cache_size": 256, 00:15:43.931 "bdev_auto_examine": true, 00:15:43.931 "iobuf_small_cache_size": 128, 00:15:43.931 "iobuf_large_cache_size": 16 00:15:43.931 } 00:15:43.931 }, 00:15:43.931 { 00:15:43.931 "method": "bdev_raid_set_options", 00:15:43.931 "params": { 00:15:43.931 "process_window_size_kb": 1024, 00:15:43.931 "process_max_bandwidth_mb_sec": 0 00:15:43.931 } 00:15:43.931 }, 00:15:43.931 { 00:15:43.931 "method": "bdev_iscsi_set_options", 00:15:43.931 "params": { 00:15:43.931 "timeout_sec": 30 00:15:43.931 } 00:15:43.931 }, 00:15:43.931 { 00:15:43.931 "method": "bdev_nvme_set_options", 00:15:43.931 "params": { 00:15:43.931 "action_on_timeout": "none", 00:15:43.931 "timeout_us": 0, 00:15:43.931 "timeout_admin_us": 0, 00:15:43.931 "keep_alive_timeout_ms": 10000, 00:15:43.931 "arbitration_burst": 0, 00:15:43.931 "low_priority_weight": 0, 00:15:43.931 "medium_priority_weight": 0, 00:15:43.931 "high_priority_weight": 0, 00:15:43.931 "nvme_adminq_poll_period_us": 10000, 00:15:43.931 "nvme_ioq_poll_period_us": 0, 00:15:43.931 "io_queue_requests": 512, 00:15:43.931 "delay_cmd_submit": true, 00:15:43.931 "transport_retry_count": 4, 00:15:43.931 "bdev_retry_count": 3, 00:15:43.931 "transport_ack_timeout": 0, 00:15:43.931 "ctrlr_loss_timeout_sec": 0, 00:15:43.931 "reconnect_delay_sec": 0, 00:15:43.931 "fast_io_fail_timeout_sec": 0, 00:15:43.931 "disable_auto_failback": false, 00:15:43.931 "generate_uuids": false, 00:15:43.931 "transport_tos": 0, 00:15:43.931 "nvme_error_stat": false, 00:15:43.931 "rdma_srq_size": 0, 00:15:43.931 "io_path_stat": false, 00:15:43.931 "allow_accel_sequence": false, 00:15:43.931 "rdma_max_cq_size": 0, 00:15:43.931 
"rdma_cm_event_timeout_ms": 0, 00:15:43.931 "dhchap_digests": [ 00:15:43.931 "sha256", 00:15:43.931 "sha384", 00:15:43.931 "sha512" 00:15:43.931 ], 00:15:43.931 "dhchap_dhgroups": [ 00:15:43.931 "null", 00:15:43.931 "ffdhe2048", 00:15:43.931 "ffdhe3072", 00:15:43.931 "ffdhe4096", 00:15:43.931 "ffdhe6144", 00:15:43.931 "ffdhe8192" 00:15:43.931 ] 00:15:43.931 } 00:15:43.931 }, 00:15:43.931 { 00:15:43.931 "method": "bdev_nvme_attach_controller", 00:15:43.931 "params": { 00:15:43.931 "name": "nvme0", 00:15:43.931 "trtype": "TCP", 00:15:43.931 "adrfam": "IPv4", 00:15:43.931 "traddr": "10.0.0.3", 00:15:43.931 "trsvcid": "4420", 00:15:43.931 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:43.931 "prchk_reftag": false, 00:15:43.931 "prchk_guard": false, 00:15:43.931 "ctrlr_loss_timeout_sec": 0, 00:15:43.931 "reconnect_delay_sec": 0, 00:15:43.931 "fast_io_fail_timeout_sec": 0, 00:15:43.931 "psk": "key0", 00:15:43.931 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:43.931 "hdgst": false, 00:15:43.931 "ddgst": false, 00:15:43.931 "multipath": "multipath" 00:15:43.931 } 00:15:43.931 }, 00:15:43.931 { 00:15:43.931 "method": "bdev_nvme_set_hotplug", 00:15:43.931 "params": { 00:15:43.931 "period_us": 100000, 00:15:43.931 "enable": false 00:15:43.932 } 00:15:43.932 }, 00:15:43.932 { 00:15:43.932 "method": "bdev_enable_histogram", 00:15:43.932 "params": { 00:15:43.932 "name": "nvme0n1", 00:15:43.932 "enable": true 00:15:43.932 } 00:15:43.932 }, 00:15:43.932 { 00:15:43.932 "method": "bdev_wait_for_examine" 00:15:43.932 } 00:15:43.932 ] 00:15:43.932 }, 00:15:43.932 { 00:15:43.932 "subsystem": "nbd", 00:15:43.932 "config": [] 00:15:43.932 } 00:15:43.932 ] 00:15:43.932 }' 00:15:43.932 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:43.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:43.932 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:43.932 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:43.932 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:43.932 [2024-12-06 09:53:09.126289] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:15:43.932 [2024-12-06 09:53:09.126862] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72348 ] 00:15:44.230 [2024-12-06 09:53:09.283600] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:44.231 [2024-12-06 09:53:09.364603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:44.489 [2024-12-06 09:53:09.529339] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:44.489 [2024-12-06 09:53:09.598998] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:45.055 09:53:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:45.055 09:53:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:45.055 09:53:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:45.055 09:53:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:15:45.314 09:53:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:45.314 09:53:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:45.314 Running I/O for 1 seconds... 00:15:46.250 3689.00 IOPS, 14.41 MiB/s 00:15:46.250 Latency(us) 00:15:46.250 [2024-12-06T09:53:11.522Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:46.250 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:46.250 Verification LBA range: start 0x0 length 0x2000 00:15:46.250 nvme0n1 : 1.02 3755.86 14.67 0.00 0.00 33788.57 5272.67 27525.12 00:15:46.250 [2024-12-06T09:53:11.522Z] =================================================================================================================== 00:15:46.250 [2024-12-06T09:53:11.522Z] Total : 3755.86 14.67 0.00 0.00 33788.57 5272.67 27525.12 00:15:46.250 { 00:15:46.250 "results": [ 00:15:46.250 { 00:15:46.250 "job": "nvme0n1", 00:15:46.250 "core_mask": "0x2", 00:15:46.250 "workload": "verify", 00:15:46.250 "status": "finished", 00:15:46.250 "verify_range": { 00:15:46.250 "start": 0, 00:15:46.250 "length": 8192 00:15:46.250 }, 00:15:46.250 "queue_depth": 128, 00:15:46.250 "io_size": 4096, 00:15:46.250 "runtime": 1.016546, 00:15:46.250 "iops": 3755.8556130268576, 00:15:46.250 "mibps": 14.671310988386162, 00:15:46.250 "io_failed": 0, 00:15:46.250 "io_timeout": 0, 00:15:46.250 "avg_latency_us": 33788.56894899757, 00:15:46.250 "min_latency_us": 5272.669090909091, 00:15:46.250 "max_latency_us": 27525.12 00:15:46.250 } 00:15:46.250 ], 00:15:46.250 "core_count": 1 00:15:46.250 } 00:15:46.509 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:15:46.509 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:15:46.509 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:15:46.509 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:15:46.509 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:15:46.509 
09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:15:46.509 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:46.509 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:15:46.509 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:15:46.509 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:15:46.509 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:46.509 nvmf_trace.0 00:15:46.509 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:15:46.509 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 72348 00:15:46.509 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72348 ']' 00:15:46.509 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72348 00:15:46.509 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:46.509 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:46.509 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72348 00:15:46.509 killing process with pid 72348 00:15:46.509 Received shutdown signal, test time was about 1.000000 seconds 00:15:46.509 00:15:46.510 Latency(us) 00:15:46.510 [2024-12-06T09:53:11.782Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:46.510 [2024-12-06T09:53:11.782Z] =================================================================================================================== 00:15:46.510 [2024-12-06T09:53:11.782Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:46.510 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:46.510 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:46.510 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72348' 00:15:46.510 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72348 00:15:46.510 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72348 00:15:46.769 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:15:46.769 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:46.769 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:15:46.769 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:46.769 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:15:46.769 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:46.769 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:46.769 rmmod nvme_tcp 00:15:46.769 rmmod nvme_fabrics 00:15:47.028 rmmod nvme_keyring 00:15:47.028 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # 
modprobe -v -r nvme-fabrics 00:15:47.028 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:15:47.028 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:15:47.028 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 72316 ']' 00:15:47.028 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 72316 00:15:47.028 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72316 ']' 00:15:47.028 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72316 00:15:47.028 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:47.028 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:47.028 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72316 00:15:47.028 killing process with pid 72316 00:15:47.028 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:47.028 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:47.028 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72316' 00:15:47.028 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72316 00:15:47.028 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72316 00:15:47.288 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:47.288 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:47.288 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:47.288 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:15:47.288 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:47.288 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:15:47.288 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:15:47.288 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:47.288 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:47.288 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:47.288 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:47.288 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:47.288 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:47.288 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:47.288 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:47.288 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:47.288 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:47.288 09:53:12 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:47.288 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:47.288 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:47.288 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:47.288 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:47.288 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:47.288 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:47.288 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:47.288 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:47.557 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@300 -- # return 0 00:15:47.557 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.mj9UsBQ8gX /tmp/tmp.TJwBmwkE8z /tmp/tmp.kfJwOpD591 00:15:47.557 ************************************ 00:15:47.557 END TEST nvmf_tls 00:15:47.557 ************************************ 00:15:47.557 00:15:47.557 real 1m25.292s 00:15:47.557 user 2m12.898s 00:15:47.557 sys 0m30.818s 00:15:47.557 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:47.557 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:47.557 09:53:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:15:47.557 09:53:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:47.557 09:53:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:47.557 09:53:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:47.557 ************************************ 00:15:47.557 START TEST nvmf_fips 00:15:47.557 ************************************ 00:15:47.557 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:15:47.557 * Looking for test storage... 
00:15:47.557 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:15:47.557 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:47.557 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lcov --version 00:15:47.557 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:47.557 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:47.557 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:47.557 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:47.557 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:47.557 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:15:47.557 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:15:47.557 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:15:47.557 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:15:47.557 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:15:47.557 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:15:47.557 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:15:47.557 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:47.557 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:15:47.557 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:15:47.557 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:47.557 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:47.557 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:15:47.557 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:15:47.557 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:47.557 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:15:47.557 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:15:47.557 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:15:47.557 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:15:47.557 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:47.557 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:15:47.557 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:15:47.557 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:47.557 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:47.557 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:15:47.557 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:47.557 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:47.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:47.557 --rc genhtml_branch_coverage=1 00:15:47.557 --rc genhtml_function_coverage=1 00:15:47.557 --rc genhtml_legend=1 00:15:47.557 --rc geninfo_all_blocks=1 00:15:47.557 --rc geninfo_unexecuted_blocks=1 00:15:47.557 00:15:47.557 ' 00:15:47.557 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:47.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:47.557 --rc genhtml_branch_coverage=1 00:15:47.557 --rc genhtml_function_coverage=1 00:15:47.557 --rc genhtml_legend=1 00:15:47.557 --rc geninfo_all_blocks=1 00:15:47.557 --rc geninfo_unexecuted_blocks=1 00:15:47.557 00:15:47.557 ' 00:15:47.557 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:47.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:47.557 --rc genhtml_branch_coverage=1 00:15:47.557 --rc genhtml_function_coverage=1 00:15:47.557 --rc genhtml_legend=1 00:15:47.557 --rc geninfo_all_blocks=1 00:15:47.557 --rc geninfo_unexecuted_blocks=1 00:15:47.557 00:15:47.557 ' 00:15:47.557 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:47.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:47.557 --rc genhtml_branch_coverage=1 00:15:47.557 --rc genhtml_function_coverage=1 00:15:47.557 --rc genhtml_legend=1 00:15:47.557 --rc geninfo_all_blocks=1 00:15:47.557 --rc geninfo_unexecuted_blocks=1 00:15:47.557 00:15:47.557 ' 00:15:47.557 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:47.557 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:15:47.557 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
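The lt/cmp_versions trace above is what selects the lcov coverage flags: each dotted version is split on '.', '-' and ':' and compared field by field, so lt 1.15 2 succeeds because 1 < 2 in the first field. A functionally equivalent sketch using sort -V instead of the script's own per-field loop (the standalone helper below is illustrative, not the autotest implementation):

  lt() {
      # true when $1 sorts strictly before $2 as a version string
      [ "$1" != "$2" ] && [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
  }
  if lt "$(lcov --version | awk '{print $NF}')" 2; then
      lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
  fi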
00:15:47.557 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:47.557 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:47.558 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:47.558 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:47.558 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:47.558 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:47.558 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:47.558 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:47.558 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:47.831 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:15:47.831 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:15:47.831 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:47.831 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:47.831 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:47.831 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:47.831 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:47.831 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:15:47.831 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:47.831 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:47.831 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:47.831 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.831 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.832 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.832 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:15:47.832 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.832 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:15:47.832 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:47.832 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:47.832 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:47.832 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:47.832 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:47.832 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:47.832 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:47.832 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:47.832 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:47.832 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:47.832 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:47.832 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:15:47.832 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local 
target=3.0.0 00:15:47.832 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:15:47.832 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:15:47.832 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:15:47.832 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:15:47.832 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:47.832 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:47.832 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:15:47.832 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:15:47.832 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:15:47.832 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:15:47.832 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:15:47.832 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:15:47.832 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:15:47.832 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:47.832 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:15:47.832 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:15:47.832 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:47.832 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:47.832 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:15:47.832 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:15:47.832 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:15:47.832 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:15:47.832 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:15:47.832 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:15:47.832 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:15:47.832 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:15:47.832 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:15:47.832 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:15:47.832 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:47.832 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:47.832 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:15:47.832 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:47.832 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:15:47.832 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:15:47.832 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:47.832 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:15:47.832 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:15:47.832 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:15:47.832 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:15:47.832 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:15:47.832 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:15:47.832 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:15:47.832 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:47.832 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:15:47.832 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:15:47.832 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:15:47.832 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:15:47.832 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:15:47.832 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:15:47.832 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:15:47.832 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:15:47.832 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:15:47.832 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:15:47.832 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:15:47.832 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:15:47.832 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:15:47.832 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:15:47.832 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:15:47.832 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:15:47.832 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:15:47.832 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:15:47.832 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:15:47.832 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:15:47.832 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:15:47.832 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:15:47.832 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:15:47.832 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:15:47.832 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:15:47.832 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:47.832 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:15:47.832 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:47.832 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:15:47.833 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:47.833 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:15:47.833 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:15:47.833 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:15:47.833 Error setting digest 00:15:47.833 40F29F97DC7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:15:47.833 40F29F97DC7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:15:47.833 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:15:47.833 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:47.833 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:47.833 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:47.833 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:15:47.833 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:47.833 
09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:47.833 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:47.833 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:47.833 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:47.833 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:47.833 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:47.833 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:47.833 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:47.833 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:47.833 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:47.833 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:47.833 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:47.833 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:47.833 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:47.833 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:47.833 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:47.833 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:47.833 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:47.833 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:47.833 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:47.833 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:47.833 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:47.833 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:47.833 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:47.833 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:47.833 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:47.833 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:47.833 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:47.833 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:47.833 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:47.833 Cannot find device "nvmf_init_br" 00:15:47.833 09:53:13 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 00:15:47.833 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:47.833 Cannot find device "nvmf_init_br2" 00:15:47.833 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 00:15:47.833 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:47.833 Cannot find device "nvmf_tgt_br" 00:15:47.833 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # true 00:15:47.833 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:47.833 Cannot find device "nvmf_tgt_br2" 00:15:47.833 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # true 00:15:47.833 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:47.833 Cannot find device "nvmf_init_br" 00:15:47.833 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # true 00:15:47.833 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:47.833 Cannot find device "nvmf_init_br2" 00:15:47.833 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # true 00:15:47.833 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:47.833 Cannot find device "nvmf_tgt_br" 00:15:47.833 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # true 00:15:47.833 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:48.092 Cannot find device "nvmf_tgt_br2" 00:15:48.092 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # true 00:15:48.092 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:48.092 Cannot find device "nvmf_br" 00:15:48.092 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # true 00:15:48.092 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:48.092 Cannot find device "nvmf_init_if" 00:15:48.092 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # true 00:15:48.092 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:48.092 Cannot find device "nvmf_init_if2" 00:15:48.092 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # true 00:15:48.092 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:48.092 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:48.092 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # true 00:15:48.093 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:48.093 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:48.093 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # true 00:15:48.093 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:48.093 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:48.093 09:53:13 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:48.093 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:48.093 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:48.093 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:48.093 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:48.093 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:48.093 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:48.093 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:48.093 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:48.093 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:48.093 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:48.093 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:48.093 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:48.093 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:48.093 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:48.093 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:48.093 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:48.093 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:48.093 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:48.093 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:48.093 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:48.093 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:48.352 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:48.352 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:48.352 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:48.352 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:48.352 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:48.352 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:48.352 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:48.352 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:48.352 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:48.352 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:48.352 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:15:48.352 00:15:48.352 --- 10.0.0.3 ping statistics --- 00:15:48.352 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:48.352 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:15:48.352 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:48.352 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:48.352 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.056 ms 00:15:48.352 00:15:48.352 --- 10.0.0.4 ping statistics --- 00:15:48.352 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:48.352 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:15:48.352 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:48.352 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:48.352 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:15:48.352 00:15:48.352 --- 10.0.0.1 ping statistics --- 00:15:48.352 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:48.352 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:15:48.352 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:48.352 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:48.352 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:15:48.352 00:15:48.352 --- 10.0.0.2 ping statistics --- 00:15:48.352 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:48.352 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:15:48.352 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:48.352 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@461 -- # return 0 00:15:48.352 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:48.352 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:48.352 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:48.352 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:48.352 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:48.352 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:48.352 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:48.352 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:15:48.352 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:48.352 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:48.352 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:48.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:48.352 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=72664 00:15:48.352 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 72664 00:15:48.352 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 72664 ']' 00:15:48.352 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:48.353 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:48.353 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:48.353 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:48.353 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:48.353 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:48.353 [2024-12-06 09:53:13.555643] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:15:48.353 [2024-12-06 09:53:13.555752] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:48.612 [2024-12-06 09:53:13.712354] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:48.612 [2024-12-06 09:53:13.787485] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:48.612 [2024-12-06 09:53:13.787585] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:48.612 [2024-12-06 09:53:13.787604] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:48.612 [2024-12-06 09:53:13.787617] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:48.612 [2024-12-06 09:53:13.787627] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:48.612 [2024-12-06 09:53:13.788214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:48.612 [2024-12-06 09:53:13.869241] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:49.549 09:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:49.549 09:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:15:49.549 09:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:49.549 09:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:49.549 09:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:49.549 09:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:49.549 09:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:15:49.549 09:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:15:49.549 09:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:15:49.549 09:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.1Iu 00:15:49.549 09:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:15:49.549 09:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.1Iu 00:15:49.549 09:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.1Iu 00:15:49.549 09:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.1Iu 00:15:49.549 09:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:49.807 [2024-12-06 09:53:14.915293] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:49.807 [2024-12-06 09:53:14.931175] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:49.807 [2024-12-06 09:53:14.931442] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:49.807 malloc0 00:15:49.807 09:53:14 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:49.807 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=72707 00:15:49.807 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:49.807 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 72707 /var/tmp/bdevperf.sock 00:15:49.807 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 72707 ']' 00:15:49.807 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:49.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:49.807 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:49.807 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:49.807 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:49.807 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:50.065 [2024-12-06 09:53:15.088030] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:15:50.065 [2024-12-06 09:53:15.088321] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72707 ] 00:15:50.065 [2024-12-06 09:53:15.240759] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:50.065 [2024-12-06 09:53:15.299540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:50.324 [2024-12-06 09:53:15.358774] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:50.324 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:50.324 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:15:50.324 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.1Iu 00:15:50.582 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:15:50.841 [2024-12-06 09:53:15.968394] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:50.841 TLSTESTn1 00:15:50.841 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:51.100 Running I/O for 10 seconds... 
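The bdevperf run that produces the IOPS samples below was wired up by the RPC calls traced just above: the test writes an NVMe TLS PSK to a mktemp file, registers it as key0 in bdevperf's keyring, attaches a TLS-enabled NVMe/TCP controller to the target at 10.0.0.3:4420, and then asks bdevperf to run its configured verify workload. A condensed recreation of those commands, with values copied from the trace; bdevperf must already be listening on /var/tmp/bdevperf.sock, and the temp-file name is whatever mktemp returns:

  key_path=$(mktemp -t spdk-psk.XXX)
  echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > "$key_path"
  chmod 0600 "$key_path"
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      keyring_file_add_key key0 "$key_path"
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bdevperf.sock perform_tests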
00:15:52.972 2944.00 IOPS, 11.50 MiB/s [2024-12-06T09:53:19.182Z] 2944.00 IOPS, 11.50 MiB/s [2024-12-06T09:53:20.561Z] 2944.00 IOPS, 11.50 MiB/s [2024-12-06T09:53:21.496Z] 2976.00 IOPS, 11.62 MiB/s [2024-12-06T09:53:22.430Z] 3020.80 IOPS, 11.80 MiB/s [2024-12-06T09:53:23.364Z] 3012.67 IOPS, 11.77 MiB/s [2024-12-06T09:53:24.301Z] 3004.43 IOPS, 11.74 MiB/s [2024-12-06T09:53:25.236Z] 2976.00 IOPS, 11.62 MiB/s [2024-12-06T09:53:26.609Z] 2962.33 IOPS, 11.57 MiB/s [2024-12-06T09:53:26.609Z] 2982.40 IOPS, 11.65 MiB/s 00:16:01.337 Latency(us) 00:16:01.337 [2024-12-06T09:53:26.609Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:01.337 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:01.337 Verification LBA range: start 0x0 length 0x2000 00:16:01.337 TLSTESTn1 : 10.03 2987.01 11.67 0.00 0.00 42776.89 7506.85 28716.68 00:16:01.337 [2024-12-06T09:53:26.609Z] =================================================================================================================== 00:16:01.337 [2024-12-06T09:53:26.609Z] Total : 2987.01 11.67 0.00 0.00 42776.89 7506.85 28716.68 00:16:01.337 { 00:16:01.337 "results": [ 00:16:01.337 { 00:16:01.337 "job": "TLSTESTn1", 00:16:01.337 "core_mask": "0x4", 00:16:01.337 "workload": "verify", 00:16:01.337 "status": "finished", 00:16:01.337 "verify_range": { 00:16:01.337 "start": 0, 00:16:01.337 "length": 8192 00:16:01.337 }, 00:16:01.337 "queue_depth": 128, 00:16:01.337 "io_size": 4096, 00:16:01.337 "runtime": 10.027416, 00:16:01.337 "iops": 2987.0108111601235, 00:16:01.337 "mibps": 11.668010981094232, 00:16:01.337 "io_failed": 0, 00:16:01.337 "io_timeout": 0, 00:16:01.337 "avg_latency_us": 42776.888391608394, 00:16:01.337 "min_latency_us": 7506.850909090909, 00:16:01.337 "max_latency_us": 28716.683636363636 00:16:01.337 } 00:16:01.337 ], 00:16:01.337 "core_count": 1 00:16:01.337 } 00:16:01.337 09:53:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:16:01.337 09:53:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:16:01.337 09:53:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:16:01.337 09:53:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:16:01.337 09:53:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:16:01.337 09:53:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:16:01.337 09:53:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:16:01.337 09:53:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:16:01.337 09:53:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:16:01.337 09:53:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:16:01.337 nvmf_trace.0 00:16:01.337 09:53:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:16:01.337 09:53:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 72707 00:16:01.337 09:53:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 72707 ']' 00:16:01.337 09:53:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill 
-0 72707 00:16:01.337 09:53:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:16:01.337 09:53:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:01.337 09:53:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72707 00:16:01.337 killing process with pid 72707 00:16:01.337 Received shutdown signal, test time was about 10.000000 seconds 00:16:01.337 00:16:01.337 Latency(us) 00:16:01.337 [2024-12-06T09:53:26.609Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:01.337 [2024-12-06T09:53:26.609Z] =================================================================================================================== 00:16:01.337 [2024-12-06T09:53:26.609Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:01.337 09:53:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:01.337 09:53:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:01.337 09:53:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72707' 00:16:01.337 09:53:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 72707 00:16:01.337 09:53:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 72707 00:16:01.337 09:53:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:16:01.337 09:53:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:01.337 09:53:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:16:01.596 09:53:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:01.596 09:53:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:16:01.596 09:53:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:01.596 09:53:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:01.596 rmmod nvme_tcp 00:16:01.596 rmmod nvme_fabrics 00:16:01.596 rmmod nvme_keyring 00:16:01.596 09:53:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:01.596 09:53:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:16:01.596 09:53:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:16:01.596 09:53:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 72664 ']' 00:16:01.596 09:53:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 72664 00:16:01.596 09:53:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 72664 ']' 00:16:01.596 09:53:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 72664 00:16:01.596 09:53:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:16:01.596 09:53:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:01.596 09:53:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72664 00:16:01.596 killing process with pid 72664 00:16:01.596 09:53:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:01.596 09:53:26 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:01.596 09:53:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72664' 00:16:01.596 09:53:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 72664 00:16:01.596 09:53:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 72664 00:16:01.854 09:53:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:01.854 09:53:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:01.854 09:53:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:01.854 09:53:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:16:01.854 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:16:01.854 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:01.854 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:16:01.854 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:01.854 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:01.854 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:01.854 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:01.854 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:01.854 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:01.854 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:01.854 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:01.854 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:01.854 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:01.854 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:01.854 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:02.113 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:02.113 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:02.113 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:02.113 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:02.113 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:02.113 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:02.113 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:02.113 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@300 -- # return 0 
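The teardown traced above (nvmf_tcp_fini / nvmf_veth_fini) undoes the virtual topology built at the start of the test: SPDK-tagged iptables rules are filtered out, the veth legs are detached from the bridge and brought down, the bridge and host-side interfaces are deleted, and the target-side interfaces are removed inside the namespace. A condensed sketch of the same sequence; the final ip netns delete is performed by _remove_spdk_ns, whose trace is suppressed in the log, so that line is an assumption:

  iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the SPDK_NVMF-tagged rules
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" nomaster
      ip link set "$dev" down
  done
  ip link delete nvmf_br type bridge
  ip link delete nvmf_init_if
  ip link delete nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
  ip netns delete nvmf_tgt_ns_spdk   # assumed: performed by _remove_spdk_ns (trace hidden)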
00:16:02.113 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.1Iu 00:16:02.113 00:16:02.113 real 0m14.604s 00:16:02.113 user 0m19.025s 00:16:02.113 sys 0m6.364s 00:16:02.113 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:02.113 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:16:02.113 ************************************ 00:16:02.113 END TEST nvmf_fips 00:16:02.113 ************************************ 00:16:02.113 09:53:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:16:02.113 09:53:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:02.113 09:53:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:02.113 09:53:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:02.113 ************************************ 00:16:02.113 START TEST nvmf_control_msg_list 00:16:02.113 ************************************ 00:16:02.113 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:16:02.113 * Looking for test storage... 00:16:02.113 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:02.113 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:02.113 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lcov --version 00:16:02.113 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:02.373 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:02.373 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:02.373 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:02.373 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:02.373 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:16:02.373 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:16:02.373 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:16:02.373 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:16:02.373 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:16:02.373 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:16:02.373 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:16:02.373 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:02.373 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:16:02.373 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:16:02.373 09:53:27 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:02.373 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:02.373 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:16:02.373 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:16:02.373 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:02.373 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:16:02.373 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:16:02.373 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:16:02.373 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:16:02.373 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:02.373 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:16:02.373 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:16:02.373 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:02.373 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:02.373 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:16:02.373 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:02.373 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:02.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:02.373 --rc genhtml_branch_coverage=1 00:16:02.373 --rc genhtml_function_coverage=1 00:16:02.373 --rc genhtml_legend=1 00:16:02.373 --rc geninfo_all_blocks=1 00:16:02.373 --rc geninfo_unexecuted_blocks=1 00:16:02.373 00:16:02.373 ' 00:16:02.373 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:02.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:02.373 --rc genhtml_branch_coverage=1 00:16:02.373 --rc genhtml_function_coverage=1 00:16:02.373 --rc genhtml_legend=1 00:16:02.373 --rc geninfo_all_blocks=1 00:16:02.373 --rc geninfo_unexecuted_blocks=1 00:16:02.373 00:16:02.373 ' 00:16:02.373 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:02.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:02.373 --rc genhtml_branch_coverage=1 00:16:02.373 --rc genhtml_function_coverage=1 00:16:02.373 --rc genhtml_legend=1 00:16:02.373 --rc geninfo_all_blocks=1 00:16:02.373 --rc geninfo_unexecuted_blocks=1 00:16:02.373 00:16:02.373 ' 00:16:02.373 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:02.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:02.373 --rc genhtml_branch_coverage=1 00:16:02.373 --rc genhtml_function_coverage=1 00:16:02.373 --rc genhtml_legend=1 00:16:02.373 --rc 
geninfo_all_blocks=1 00:16:02.373 --rc geninfo_unexecuted_blocks=1 00:16:02.373 00:16:02.373 ' 00:16:02.373 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:02.373 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:16:02.373 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:02.373 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:02.373 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:02.373 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:02.373 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:02.373 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:02.373 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:02.373 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:02.373 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:02.373 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:02.373 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:16:02.373 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:16:02.373 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:02.373 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:02.373 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:02.373 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:02.373 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:02.373 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:16:02.373 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:02.373 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:02.373 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:02.373 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:02.373 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:02.373 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:02.373 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:16:02.373 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:02.373 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:16:02.373 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:02.373 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:02.373 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:02.374 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:02.374 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:02.374 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:02.374 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:02.374 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:02.374 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:02.374 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:02.374 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:16:02.374 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:02.374 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:02.374 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:02.374 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:02.374 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:02.374 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:02.374 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:02.374 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:02.374 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:02.374 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:02.374 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:02.374 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:02.374 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:02.374 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:02.374 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:02.374 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:02.374 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:02.374 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:02.374 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:02.374 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:02.374 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:02.374 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:02.374 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:02.374 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:02.374 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:02.374 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:02.374 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:02.374 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:02.374 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:02.374 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:02.374 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:02.374 Cannot find device "nvmf_init_br" 00:16:02.374 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # true 00:16:02.374 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:02.374 Cannot find device "nvmf_init_br2" 00:16:02.374 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # true 00:16:02.374 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:02.374 Cannot find device "nvmf_tgt_br" 00:16:02.374 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # true 00:16:02.374 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:02.374 Cannot find device "nvmf_tgt_br2" 00:16:02.374 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # true 00:16:02.374 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:02.374 Cannot find device "nvmf_init_br" 00:16:02.374 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # true 00:16:02.374 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:02.374 Cannot find device "nvmf_init_br2" 00:16:02.374 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # true 00:16:02.374 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:02.374 Cannot find device "nvmf_tgt_br" 00:16:02.374 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # true 00:16:02.374 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:02.374 Cannot find device "nvmf_tgt_br2" 00:16:02.374 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # true 00:16:02.374 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:02.374 Cannot find device "nvmf_br" 00:16:02.374 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # true 00:16:02.374 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:02.374 Cannot find 
device "nvmf_init_if" 00:16:02.374 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # true 00:16:02.374 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:02.632 Cannot find device "nvmf_init_if2" 00:16:02.633 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # true 00:16:02.633 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:02.633 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:02.633 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # true 00:16:02.633 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:02.633 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:02.633 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # true 00:16:02.633 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:02.633 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:02.633 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:02.633 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:02.633 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:02.633 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:02.633 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:02.633 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:02.633 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:02.633 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:02.633 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:02.633 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:02.633 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:02.633 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:02.633 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:02.633 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:02.633 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:02.633 09:53:27 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:02.633 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:02.633 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:02.633 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:02.633 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:02.633 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:02.633 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:02.633 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:02.891 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:02.891 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:02.891 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:02.891 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:02.892 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:02.892 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:02.892 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:02.892 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:02.892 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:02.892 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:16:02.892 00:16:02.892 --- 10.0.0.3 ping statistics --- 00:16:02.892 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:02.892 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:16:02.892 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:02.892 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:02.892 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.078 ms 00:16:02.892 00:16:02.892 --- 10.0.0.4 ping statistics --- 00:16:02.892 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:02.892 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:16:02.892 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:02.892 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:02.892 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:16:02.892 00:16:02.892 --- 10.0.0.1 ping statistics --- 00:16:02.892 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:02.892 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:16:02.892 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:02.892 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:02.892 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:16:02.892 00:16:02.892 --- 10.0.0.2 ping statistics --- 00:16:02.892 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:02.892 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:16:02.892 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:02.892 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@461 -- # return 0 00:16:02.892 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:02.892 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:02.892 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:02.892 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:02.892 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:02.892 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:02.892 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:02.892 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:16:02.892 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:02.892 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:02.892 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:16:02.892 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=73089 00:16:02.892 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:16:02.892 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 73089 00:16:02.892 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 73089 ']' 00:16:02.892 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:02.892 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:02.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:02.892 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
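[editor's note] The nvmf_veth_init sequence captured above (namespace, veth pairs, bridge, iptables, ping checks) can be reproduced stand-alone. A minimal sketch follows, run as root; interface/namespace names and the 10.0.0.0/24 address plan are taken from the log, but this is an illustration of the topology, not the test script itself (the real nvmf_veth_init also creates the *_if2 pair and tags its iptables rules with an SPDK_NVMF comment):

    #!/usr/bin/env bash
    # One namespace for the target, veth pairs joined by a bridge, TCP/4420 allowed.
    set -u

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    # Initiator side stays in the root namespace, target side lives in the netns.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

    ip link add nvmf_br type bridge
    for dev in nvmf_br nvmf_init_if nvmf_init_br nvmf_tgt_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br

    # Let NVMe/TCP traffic on port 4420 in, and let the bridge forward between its ports.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    ping -c 1 10.0.0.3   # initiator side should now reach the target namespace

With this in place, the target is started inside the namespace (as the log does next with "ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF") so that it listens on 10.0.0.3:4420 while the perf initiators connect from the root namespace.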
00:16:02.892 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:02.892 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:16:02.892 [2024-12-06 09:53:28.050589] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:16:02.892 [2024-12-06 09:53:28.050684] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:03.151 [2024-12-06 09:53:28.204528] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:03.151 [2024-12-06 09:53:28.269401] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:03.151 [2024-12-06 09:53:28.269468] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:03.151 [2024-12-06 09:53:28.269482] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:03.151 [2024-12-06 09:53:28.269493] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:03.151 [2024-12-06 09:53:28.269503] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:03.151 [2024-12-06 09:53:28.270005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:03.151 [2024-12-06 09:53:28.330658] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:03.151 09:53:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:03.151 09:53:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:16:03.151 09:53:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:03.151 09:53:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:03.151 09:53:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:16:03.410 09:53:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:03.410 09:53:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:16:03.410 09:53:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:16:03.410 09:53:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:16:03.410 09:53:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.410 09:53:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:16:03.410 [2024-12-06 09:53:28.453025] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:03.410 09:53:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.410 09:53:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd 
nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:16:03.410 09:53:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.410 09:53:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:16:03.410 09:53:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.410 09:53:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:16:03.410 09:53:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.410 09:53:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:16:03.410 Malloc0 00:16:03.410 09:53:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.410 09:53:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:16:03.410 09:53:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.410 09:53:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:16:03.410 09:53:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.410 09:53:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:16:03.410 09:53:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.410 09:53:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:16:03.410 [2024-12-06 09:53:28.492613] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:03.410 09:53:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.410 09:53:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=73112 00:16:03.410 09:53:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:16:03.410 09:53:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=73113 00:16:03.410 09:53:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:16:03.410 09:53:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=73114 00:16:03.410 09:53:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 73112 00:16:03.410 09:53:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:16:03.669 [2024-12-06 09:53:28.691304] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:16:03.669 [2024-12-06 09:53:28.691563] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:16:03.669 [2024-12-06 09:53:28.701194] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:16:04.603 Initializing NVMe Controllers 00:16:04.603 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:16:04.603 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:16:04.603 Initialization complete. Launching workers. 00:16:04.603 ======================================================== 00:16:04.603 Latency(us) 00:16:04.603 Device Information : IOPS MiB/s Average min max 00:16:04.603 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 3748.00 14.64 266.41 143.42 739.40 00:16:04.603 ======================================================== 00:16:04.603 Total : 3748.00 14.64 266.41 143.42 739.40 00:16:04.603 00:16:04.603 Initializing NVMe Controllers 00:16:04.603 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:16:04.603 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:16:04.603 Initialization complete. Launching workers. 00:16:04.603 ======================================================== 00:16:04.603 Latency(us) 00:16:04.603 Device Information : IOPS MiB/s Average min max 00:16:04.603 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 3727.00 14.56 267.94 179.47 761.70 00:16:04.604 ======================================================== 00:16:04.604 Total : 3727.00 14.56 267.94 179.47 761.70 00:16:04.604 00:16:04.604 Initializing NVMe Controllers 00:16:04.604 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:16:04.604 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:16:04.604 Initialization complete. Launching workers. 
00:16:04.604 ======================================================== 00:16:04.604 Latency(us) 00:16:04.604 Device Information : IOPS MiB/s Average min max 00:16:04.604 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 3768.00 14.72 264.98 109.38 816.88 00:16:04.604 ======================================================== 00:16:04.604 Total : 3768.00 14.72 264.98 109.38 816.88 00:16:04.604 00:16:04.604 09:53:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 73113 00:16:04.604 09:53:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 73114 00:16:04.604 09:53:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:16:04.604 09:53:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:16:04.604 09:53:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:04.604 09:53:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:16:04.604 09:53:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:04.604 09:53:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:16:04.604 09:53:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:04.604 09:53:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:04.604 rmmod nvme_tcp 00:16:04.604 rmmod nvme_fabrics 00:16:04.604 rmmod nvme_keyring 00:16:04.604 09:53:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:04.604 09:53:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:16:04.604 09:53:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:16:04.604 09:53:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 73089 ']' 00:16:04.604 09:53:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 73089 00:16:04.604 09:53:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 73089 ']' 00:16:04.604 09:53:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 73089 00:16:04.604 09:53:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:16:04.604 09:53:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:04.604 09:53:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73089 00:16:04.862 killing process with pid 73089 00:16:04.862 09:53:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:04.862 09:53:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:04.862 09:53:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73089' 00:16:04.862 09:53:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 73089 00:16:04.862 09:53:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@978 -- # wait 73089 00:16:04.862 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:04.862 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:04.862 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:04.862 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:16:04.862 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:04.862 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:16:04.862 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:16:04.863 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:04.863 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:04.863 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:04.863 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:04.863 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:04.863 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:05.121 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:05.121 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:05.121 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:05.121 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:05.121 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:05.121 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:05.121 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:05.121 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:05.121 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:05.121 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:05.121 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:05.121 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:05.121 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:05.121 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@300 -- # return 0 00:16:05.121 00:16:05.121 real 0m3.028s 00:16:05.121 user 0m4.808s 00:16:05.121 
sys 0m1.423s 00:16:05.121 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:05.121 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:16:05.121 ************************************ 00:16:05.121 END TEST nvmf_control_msg_list 00:16:05.121 ************************************ 00:16:05.121 09:53:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:16:05.121 09:53:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:05.121 09:53:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:05.121 09:53:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:05.121 ************************************ 00:16:05.121 START TEST nvmf_wait_for_buf 00:16:05.122 ************************************ 00:16:05.122 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:16:05.381 * Looking for test storage... 00:16:05.381 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:05.381 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:05.381 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:05.381 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lcov --version 00:16:05.381 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:05.381 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:05.381 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:05.381 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:05.381 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:16:05.381 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:16:05.381 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:16:05.381 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:16:05.381 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:16:05.381 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:16:05.381 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:16:05.381 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:05.381 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:16:05.381 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:16:05.381 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:05.381 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:05.381 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:16:05.381 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:16:05.381 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:05.381 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:16:05.381 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:16:05.381 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:16:05.381 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:16:05.381 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:05.381 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:16:05.381 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:16:05.381 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:05.381 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:05.381 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:16:05.381 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:05.381 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:05.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:05.382 --rc genhtml_branch_coverage=1 00:16:05.382 --rc genhtml_function_coverage=1 00:16:05.382 --rc genhtml_legend=1 00:16:05.382 --rc geninfo_all_blocks=1 00:16:05.382 --rc geninfo_unexecuted_blocks=1 00:16:05.382 00:16:05.382 ' 00:16:05.382 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:05.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:05.382 --rc genhtml_branch_coverage=1 00:16:05.382 --rc genhtml_function_coverage=1 00:16:05.382 --rc genhtml_legend=1 00:16:05.382 --rc geninfo_all_blocks=1 00:16:05.382 --rc geninfo_unexecuted_blocks=1 00:16:05.382 00:16:05.382 ' 00:16:05.382 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:05.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:05.382 --rc genhtml_branch_coverage=1 00:16:05.382 --rc genhtml_function_coverage=1 00:16:05.382 --rc genhtml_legend=1 00:16:05.382 --rc geninfo_all_blocks=1 00:16:05.382 --rc geninfo_unexecuted_blocks=1 00:16:05.382 00:16:05.382 ' 00:16:05.382 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:05.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:05.382 --rc genhtml_branch_coverage=1 00:16:05.382 --rc genhtml_function_coverage=1 00:16:05.382 --rc genhtml_legend=1 00:16:05.382 --rc geninfo_all_blocks=1 00:16:05.382 --rc geninfo_unexecuted_blocks=1 00:16:05.382 00:16:05.382 ' 00:16:05.382 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:05.382 09:53:30 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:16:05.382 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:05.382 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:05.382 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:05.382 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:05.382 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:05.382 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:05.382 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:05.382 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:05.382 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:05.382 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:05.382 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:16:05.382 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:16:05.382 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:05.382 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:05.382 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:05.382 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:05.382 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:05.382 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:16:05.382 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:05.382 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:05.382 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:05.382 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.382 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.382 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.382 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:16:05.382 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.382 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:16:05.382 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:05.382 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:05.382 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:05.382 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:05.382 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:05.382 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:05.382 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:05.382 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:05.382 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:05.382 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:05.382 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:16:05.382 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 
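[editor's note] The "[: : integer expression expected" message captured above comes from a numeric test on an empty expansion ('[' '' -eq 1 ']') at nvmf/common.sh line 33; the run continues because the test simply evaluates false. A minimal reproduction and a guarded form are sketched below; FLAG is a placeholder, since the log does not show which variable expanded to the empty string:

    #!/usr/bin/env bash
    FLAG=""
    [ "$FLAG" -eq 1 ] && echo enabled          # prints "[: : integer expression expected", returns false
    [ "${FLAG:-0}" -eq 1 ] && echo enabled     # defaulting the expansion keeps the test quiet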
00:16:05.382 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:05.382 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:05.382 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:05.382 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:05.382 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:05.382 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:05.382 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:05.382 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:05.382 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:05.382 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:05.382 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:05.382 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:05.382 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:05.382 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:05.382 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:05.382 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:05.382 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:05.382 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:05.382 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:05.382 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:05.382 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:05.382 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:05.382 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:05.382 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:05.382 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:05.382 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:05.382 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:05.382 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:05.382 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:05.382 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:05.382 Cannot find device "nvmf_init_br" 00:16:05.382 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # true 00:16:05.382 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:05.382 Cannot find device "nvmf_init_br2" 00:16:05.383 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # true 00:16:05.383 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:05.383 Cannot find device "nvmf_tgt_br" 00:16:05.383 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # true 00:16:05.383 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:05.383 Cannot find device "nvmf_tgt_br2" 00:16:05.383 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # true 00:16:05.383 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:05.383 Cannot find device "nvmf_init_br" 00:16:05.383 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # true 00:16:05.383 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:05.641 Cannot find device "nvmf_init_br2" 00:16:05.641 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # true 00:16:05.641 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:05.641 Cannot find device "nvmf_tgt_br" 00:16:05.641 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # true 00:16:05.641 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:05.641 Cannot find device "nvmf_tgt_br2" 00:16:05.641 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # true 00:16:05.641 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:05.641 Cannot find device "nvmf_br" 00:16:05.641 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # true 00:16:05.641 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:05.641 Cannot find device "nvmf_init_if" 00:16:05.641 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # true 00:16:05.641 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:05.641 Cannot find device "nvmf_init_if2" 00:16:05.641 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # true 00:16:05.641 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:05.641 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:05.641 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # true 00:16:05.641 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:05.641 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:05.641 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # true 00:16:05.641 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:05.641 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:05.641 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:05.641 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:05.641 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:05.641 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:05.641 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:05.641 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:05.641 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:05.641 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:05.641 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:05.641 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:05.641 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:05.641 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:05.641 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:05.641 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:05.641 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:05.641 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:05.641 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:05.641 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:05.641 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:05.641 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:05.641 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:05.900 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:05.900 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:05.900 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:05.900 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:05.900 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:05.900 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:05.900 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:05.900 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:05.900 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:05.900 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:05.900 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:05.900 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:16:05.900 00:16:05.900 --- 10.0.0.3 ping statistics --- 00:16:05.900 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:05.900 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:16:05.900 09:53:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:05.900 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:05.900 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.112 ms 00:16:05.900 00:16:05.900 --- 10.0.0.4 ping statistics --- 00:16:05.900 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:05.900 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:16:05.900 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:05.900 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:05.900 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:16:05.900 00:16:05.900 --- 10.0.0.1 ping statistics --- 00:16:05.900 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:05.900 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:16:05.900 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:05.900 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:05.900 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:16:05.900 00:16:05.900 --- 10.0.0.2 ping statistics --- 00:16:05.900 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:05.900 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:16:05.900 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:05.900 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@461 -- # return 0 00:16:05.900 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:05.900 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:05.900 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:05.900 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:05.900 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:05.900 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:05.900 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:05.900 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:16:05.900 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:05.900 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:05.900 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:05.900 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=73350 00:16:05.900 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:16:05.900 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 73350 00:16:05.900 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 73350 ']' 00:16:05.900 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:05.900 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:05.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:05.900 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:05.900 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:05.900 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:05.900 [2024-12-06 09:53:31.113037] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
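[editor's note] The launch traced just above starts nvmf_tgt inside the target namespace with --wait-for-rpc and then blocks until the RPC socket answers. A minimal sketch of that start-and-wait step, assuming the polling loop (the trace only shows the launch command and the waiting message, not how waitforlisten is implemented):

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
  nvmfpid=$!
  # poll the UNIX-domain RPC socket until the app is ready to accept configuration
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" || exit 1   # give up if the target process died
      sleep 0.5
  done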
00:16:05.900 [2024-12-06 09:53:31.113139] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:06.158 [2024-12-06 09:53:31.259364] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:06.158 [2024-12-06 09:53:31.300391] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:06.158 [2024-12-06 09:53:31.300443] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:06.158 [2024-12-06 09:53:31.300453] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:06.158 [2024-12-06 09:53:31.300459] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:06.158 [2024-12-06 09:53:31.300465] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:06.158 [2024-12-06 09:53:31.300824] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:06.159 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:06.159 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:16:06.159 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:06.159 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:06.159 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:06.417 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:06.417 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:16:06.417 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:16:06.417 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:16:06.417 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.417 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:06.417 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.417 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:16:06.417 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.417 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:06.417 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.417 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:16:06.417 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.417 09:53:31 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:06.417 [2024-12-06 09:53:31.486662] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:06.417 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.417 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:16:06.417 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.417 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:06.417 Malloc0 00:16:06.417 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.417 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:16:06.417 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.417 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:06.417 [2024-12-06 09:53:31.556200] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:06.417 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.417 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:16:06.417 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.417 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:06.417 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.417 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:16:06.417 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.417 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:06.417 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.417 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:16:06.417 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.417 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:06.417 [2024-12-06 09:53:31.580306] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:06.417 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.417 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:16:06.676 [2024-12-06 09:53:31.778675] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:16:08.050 Initializing NVMe Controllers 00:16:08.050 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:16:08.050 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:16:08.050 Initialization complete. Launching workers. 00:16:08.050 ======================================================== 00:16:08.050 Latency(us) 00:16:08.050 Device Information : IOPS MiB/s Average min max 00:16:08.050 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 508.00 63.50 7913.63 5027.53 10913.99 00:16:08.050 ======================================================== 00:16:08.050 Total : 508.00 63.50 7913.63 5027.53 10913.99 00:16:08.050 00:16:08.050 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:16:08.050 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.050 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:08.050 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:16:08.050 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.050 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=4826 00:16:08.050 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 4826 -eq 0 ]] 00:16:08.050 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:16:08.050 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:16:08.050 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:08.051 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:16:08.051 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:08.051 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:16:08.051 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:08.051 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:08.051 rmmod nvme_tcp 00:16:08.051 rmmod nvme_fabrics 00:16:08.051 rmmod nvme_keyring 00:16:08.051 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:08.051 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:16:08.051 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:16:08.051 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 73350 ']' 00:16:08.051 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 73350 00:16:08.051 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 73350 ']' 00:16:08.051 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- 
# kill -0 73350 00:16:08.051 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:16:08.051 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:08.051 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73350 00:16:08.051 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:08.051 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:08.051 killing process with pid 73350 00:16:08.051 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73350' 00:16:08.051 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 73350 00:16:08.051 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 73350 00:16:08.309 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:08.309 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:08.309 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:08.309 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:16:08.310 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:16:08.310 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:16:08.310 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:08.310 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:08.310 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:08.310 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:08.310 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:08.310 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:08.310 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:08.310 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:08.310 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:08.310 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:08.310 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:08.310 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:08.569 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:08.569 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:08.569 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:08.569 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:08.569 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:08.569 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:08.569 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:08.569 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:08.569 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@300 -- # return 0 00:16:08.569 00:16:08.569 real 0m3.357s 00:16:08.569 user 0m2.663s 00:16:08.569 sys 0m0.838s 00:16:08.569 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:08.569 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:08.569 ************************************ 00:16:08.569 END TEST nvmf_wait_for_buf 00:16:08.569 ************************************ 00:16:08.569 09:53:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:16:08.569 09:53:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ virt == phy ]] 00:16:08.569 09:53:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:16:08.569 09:53:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:08.569 09:53:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:08.569 09:53:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:08.569 ************************************ 00:16:08.569 START TEST nvmf_nsid 00:16:08.569 ************************************ 00:16:08.569 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:16:08.850 * Looking for test storage... 
00:16:08.850 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:08.850 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:08.850 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lcov --version 00:16:08.850 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:08.850 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:08.850 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:08.850 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:08.850 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:08.850 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:16:08.850 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:16:08.850 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:16:08.850 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:16:08.850 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:16:08.850 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:16:08.850 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:16:08.850 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:08.850 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:16:08.850 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:16:08.850 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:08.850 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:08.850 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:16:08.850 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:16:08.850 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:08.850 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:16:08.850 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:16:08.850 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:16:08.850 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:16:08.850 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:08.850 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:16:08.850 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:16:08.850 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:08.850 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:08.850 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:16:08.850 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:08.850 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:08.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:08.850 --rc genhtml_branch_coverage=1 00:16:08.850 --rc genhtml_function_coverage=1 00:16:08.850 --rc genhtml_legend=1 00:16:08.850 --rc geninfo_all_blocks=1 00:16:08.850 --rc geninfo_unexecuted_blocks=1 00:16:08.850 00:16:08.850 ' 00:16:08.850 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:08.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:08.850 --rc genhtml_branch_coverage=1 00:16:08.850 --rc genhtml_function_coverage=1 00:16:08.850 --rc genhtml_legend=1 00:16:08.850 --rc geninfo_all_blocks=1 00:16:08.850 --rc geninfo_unexecuted_blocks=1 00:16:08.850 00:16:08.850 ' 00:16:08.850 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:08.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:08.850 --rc genhtml_branch_coverage=1 00:16:08.850 --rc genhtml_function_coverage=1 00:16:08.850 --rc genhtml_legend=1 00:16:08.850 --rc geninfo_all_blocks=1 00:16:08.850 --rc geninfo_unexecuted_blocks=1 00:16:08.850 00:16:08.850 ' 00:16:08.850 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:08.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:08.850 --rc genhtml_branch_coverage=1 00:16:08.850 --rc genhtml_function_coverage=1 00:16:08.850 --rc genhtml_legend=1 00:16:08.850 --rc geninfo_all_blocks=1 00:16:08.850 --rc geninfo_unexecuted_blocks=1 00:16:08.850 00:16:08.850 ' 00:16:08.850 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:08.850 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:16:08.850 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
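[editor's note] The lt/cmp_versions walk traced above decides whether the installed lcov predates 2.x by splitting both version strings on '.', '-' or ':' and comparing the fields numerically. A condensed re-creation of that check (the real scripts/common.sh handles more operators and guards non-numeric fields; only the '<' path used here is shown):

  version_lt() {
      local -a ver1 ver2
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$2"
      local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < max; v++ )); do
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # strictly older
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # strictly newer
      done
      return 1                                              # equal is not less-than
  }
  version_lt 1.15 2 && echo 'lcov 1.15 predates 2.x'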
00:16:08.851 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:08.851 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:08.851 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:08.851 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:08.851 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:08.851 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:08.851 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:08.851 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:08.851 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:08.851 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:16:08.851 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:16:08.851 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:08.851 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:08.851 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:08.851 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:08.851 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:08.851 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:16:08.851 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:08.851 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:08.851 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:08.851 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.851 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.851 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.851 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:16:08.851 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.851 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:16:08.851 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:08.851 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:08.851 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:08.851 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:08.851 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:08.851 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:08.851 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:08.851 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:08.851 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:08.851 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:08.851 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:16:08.851 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:16:08.851 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # 
subnqn3=nqn.2024-10.io.spdk:cnode2 00:16:08.851 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:16:08.851 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:16:08.851 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:16:08.851 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:08.851 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:08.851 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:08.851 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:08.851 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:08.851 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:08.851 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:08.851 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:08.851 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:08.851 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:08.851 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:08.851 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:08.851 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:08.851 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:08.851 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:08.851 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:08.851 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:08.851 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:08.851 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:08.851 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:08.851 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:08.851 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:08.851 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:08.851 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:08.851 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:08.851 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:08.851 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:08.851 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:08.851 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:08.851 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:08.851 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:08.851 Cannot find device "nvmf_init_br" 00:16:08.851 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # true 00:16:08.851 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:08.851 Cannot find device "nvmf_init_br2" 00:16:08.851 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # true 00:16:08.851 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:08.851 Cannot find device "nvmf_tgt_br" 00:16:08.851 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # true 00:16:08.851 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:08.851 Cannot find device "nvmf_tgt_br2" 00:16:08.851 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # true 00:16:08.851 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:08.851 Cannot find device "nvmf_init_br" 00:16:08.851 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # true 00:16:08.851 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:08.851 Cannot find device "nvmf_init_br2" 00:16:08.851 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # true 00:16:08.851 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:08.851 Cannot find device "nvmf_tgt_br" 00:16:08.851 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # true 00:16:08.851 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:08.851 Cannot find device "nvmf_tgt_br2" 00:16:08.851 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # true 00:16:08.851 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:08.851 Cannot find device "nvmf_br" 00:16:08.851 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # true 00:16:08.851 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:08.851 Cannot find device "nvmf_init_if" 00:16:08.852 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # true 00:16:08.852 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:08.852 Cannot find device "nvmf_init_if2" 00:16:08.852 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # true 00:16:08.852 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:09.111 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:09.111 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # true 00:16:09.111 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 
00:16:09.111 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:09.111 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # true 00:16:09.111 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:09.111 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:09.111 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:09.111 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:09.111 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:09.111 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:09.111 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:09.111 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:09.111 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:09.111 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:09.111 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:09.111 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:09.111 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:09.111 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:09.111 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:09.111 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:09.111 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:09.111 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:09.111 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:09.111 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:09.111 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:09.111 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:09.111 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:09.111 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:09.111 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:09.111 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
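[editor's note] The nvmf_veth_init sequence traced above (and earlier for the wait_for_buf test) builds two initiator-side veth pairs and two target-side pairs, moves the target ends into the nvmf_tgt_ns_spdk namespace, and bridges all of the host-side peers. Compressed into one block, with names and addresses taken from the trace and the preceding cleanup pass and error handling omitted:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if  type veth peer name nvmf_init_br     # initiator, 10.0.0.1
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2    # initiator, 10.0.0.2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br      # target,    10.0.0.3
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2     # target,    10.0.0.4
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" up
  done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" master nvmf_br                          # everything meets on nvmf_br
  done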
00:16:09.111 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:09.111 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:09.111 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:09.111 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:09.111 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:09.111 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:09.111 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:09.111 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:09.111 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.111 ms 00:16:09.111 00:16:09.111 --- 10.0.0.3 ping statistics --- 00:16:09.111 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:09.111 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:16:09.111 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:09.111 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:09.111 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.053 ms 00:16:09.111 00:16:09.111 --- 10.0.0.4 ping statistics --- 00:16:09.111 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:09.111 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:16:09.111 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:09.111 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:09.111 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:16:09.111 00:16:09.111 --- 10.0.0.1 ping statistics --- 00:16:09.111 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:09.111 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:16:09.111 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:09.111 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:09.111 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.045 ms 00:16:09.111 00:16:09.111 --- 10.0.0.2 ping statistics --- 00:16:09.111 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:09.111 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:16:09.111 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:09.111 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@461 -- # return 0 00:16:09.111 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:09.111 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:09.111 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:09.111 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:09.111 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:09.111 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:09.111 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:09.111 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:16:09.111 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:09.111 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:09.111 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:16:09.370 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=73611 00:16:09.370 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 73611 00:16:09.370 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:16:09.370 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 73611 ']' 00:16:09.370 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:09.370 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:09.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:09.370 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:09.370 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:09.370 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:16:09.370 [2024-12-06 09:53:34.456557] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
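[editor's note] The ipts calls traced a few lines above, and the iptr teardown at the end of the previous test, are two halves of one pattern: every rule the test adds is tagged with an 'SPDK_NVMF' comment so cleanup can strip exactly those rules and nothing else. The traced expansions amount to roughly this (function bodies condensed from the expansions, not copied verbatim from nvmf/common.sh):

  ipts() {
      # tag the rule so teardown can recognize it later
      iptables "$@" -m comment --comment "SPDK_NVMF:$*"
  }
  iptr() {
      # drop only the tagged rules, keep the rest of the ruleset intact
      iptables-save | grep -v SPDK_NVMF | iptables-restore
  }
  ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
  ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT                 # allow forwarding across the bridge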
00:16:09.370 [2024-12-06 09:53:34.456647] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:09.370 [2024-12-06 09:53:34.602498] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:09.630 [2024-12-06 09:53:34.650895] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:09.630 [2024-12-06 09:53:34.650941] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:09.630 [2024-12-06 09:53:34.650950] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:09.630 [2024-12-06 09:53:34.650958] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:09.630 [2024-12-06 09:53:34.650964] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:09.630 [2024-12-06 09:53:34.651395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:09.630 [2024-12-06 09:53:34.702820] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:09.630 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:09.630 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:16:09.630 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:09.630 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:09.630 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:16:09.630 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:09.630 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:16:09.630 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=73631 00:16:09.630 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:16:09.630 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.3 00:16:09.630 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:16:09.630 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:16:09.630 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:09.630 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:09.630 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:09.630 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:09.630 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:09.630 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:09.630 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:09.630 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 
-- # [[ -z 10.0.0.1 ]] 00:16:09.630 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:09.630 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:16:09.630 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:16:09.630 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=836bb903-9b20-404b-9291-91a1aba24a93 00:16:09.630 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:16:09.630 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=791b8f6f-6b01-44a9-a367-d6991c191ab1 00:16:09.630 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:16:09.630 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=7aa6c7eb-8f89-4d13-98f8-049c117aaf04 00:16:09.630 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:16:09.630 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.630 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:16:09.630 null0 00:16:09.630 null1 00:16:09.630 null2 00:16:09.630 [2024-12-06 09:53:34.870034] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:09.630 [2024-12-06 09:53:34.886426] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:16:09.630 [2024-12-06 09:53:34.886517] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73631 ] 00:16:09.630 [2024-12-06 09:53:34.894185] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:09.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 00:16:09.889 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.889 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 73631 /var/tmp/tgt2.sock 00:16:09.889 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 73631 ']' 00:16:09.889 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:16:09.889 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:09.889 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 
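The three UUIDs generated above (ns1uuid, ns2uuid, ns3uuid) are what this nsid test later verifies as namespace NGUIDs: uuid2nguid simply strips the dashes, and the check compares that value, case-insensitively, against what the kernel reports for the block device. A minimal sketch of that check, assuming bash 4+ and reusing the first UUID and /dev/nvme0n1 from this trace rather than the literal nsid.sh code:

    uuid=836bb903-9b20-404b-9291-91a1aba24a93
    expected=$(tr -d - <<< "$uuid")                            # uuid2nguid: drop the dashes
    actual=$(nvme id-ns /dev/nvme0n1 -o json | jq -r .nguid)   # NGUID as reported by the host
    [[ ${actual^^} == "${expected^^}" ]] && echo "nsid 1 NGUID matches"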
00:16:09.889 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:09.889 09:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:16:09.889 [2024-12-06 09:53:35.040149] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:09.889 [2024-12-06 09:53:35.099636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:10.147 [2024-12-06 09:53:35.172729] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:10.147 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:10.147 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:16:10.147 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:16:10.714 [2024-12-06 09:53:35.785741] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:10.714 [2024-12-06 09:53:35.801859] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:16:10.714 nvme0n1 nvme0n2 00:16:10.714 nvme1n1 00:16:10.714 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:16:10.714 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:16:10.714 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --hostid=8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:16:10.972 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:16:10.972 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:16:10.972 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:16:10.972 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:16:10.972 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:16:10.972 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:16:10.973 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:16:10.973 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:16:10.973 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:16:10.973 09:53:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:16:10.973 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:16:10.973 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:16:10.973 09:53:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:16:11.909 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:16:11.909 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:16:11.909 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:16:11.909 09:53:37 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:16:11.909 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:16:11.909 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 836bb903-9b20-404b-9291-91a1aba24a93 00:16:11.909 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:16:11.909 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:16:11.909 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:16:11.909 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:16:11.909 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:16:11.909 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=836bb9039b20404b929191a1aba24a93 00:16:11.909 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 836BB9039B20404B929191A1ABA24A93 00:16:11.909 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 836BB9039B20404B929191A1ABA24A93 == \8\3\6\B\B\9\0\3\9\B\2\0\4\0\4\B\9\2\9\1\9\1\A\1\A\B\A\2\4\A\9\3 ]] 00:16:11.909 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:16:11.909 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:16:11.909 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:16:11.909 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:16:11.909 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:16:11.909 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:16:11.909 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:16:11.909 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 791b8f6f-6b01-44a9-a367-d6991c191ab1 00:16:11.909 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:16:11.909 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:16:11.909 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:16:11.909 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:16:11.909 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:16:11.909 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=791b8f6f6b0144a9a367d6991c191ab1 00:16:11.909 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 791B8F6F6B0144A9A367D6991C191AB1 00:16:11.909 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 791B8F6F6B0144A9A367D6991C191AB1 == \7\9\1\B\8\F\6\F\6\B\0\1\4\4\A\9\A\3\6\7\D\6\9\9\1\C\1\9\1\A\B\1 ]] 00:16:11.909 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:16:11.909 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:16:11.909 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:16:11.909 09:53:37 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:16:12.168 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:16:12.168 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:16:12.168 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:16:12.169 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 7aa6c7eb-8f89-4d13-98f8-049c117aaf04 00:16:12.169 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:16:12.169 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:16:12.169 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:16:12.169 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:16:12.169 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:16:12.169 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=7aa6c7eb8f894d1398f8049c117aaf04 00:16:12.169 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 7AA6C7EB8F894D1398F8049C117AAF04 00:16:12.169 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 7AA6C7EB8F894D1398F8049C117AAF04 == \7\A\A\6\C\7\E\B\8\F\8\9\4\D\1\3\9\8\F\8\0\4\9\C\1\1\7\A\A\F\0\4 ]] 00:16:12.169 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:16:12.169 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:16:12.169 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:16:12.169 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 73631 00:16:12.169 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 73631 ']' 00:16:12.169 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 73631 00:16:12.169 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:16:12.169 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:12.169 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73631 00:16:12.428 killing process with pid 73631 00:16:12.428 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:12.428 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:12.428 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73631' 00:16:12.428 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 73631 00:16:12.428 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 73631 00:16:12.997 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:16:12.997 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:12.997 09:53:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:16:12.997 09:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # 
'[' tcp == tcp ']' 00:16:12.997 09:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:16:12.997 09:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:12.997 09:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:12.997 rmmod nvme_tcp 00:16:12.997 rmmod nvme_fabrics 00:16:12.997 rmmod nvme_keyring 00:16:12.997 09:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:12.997 09:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:16:12.997 09:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:16:12.997 09:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 73611 ']' 00:16:12.997 09:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 73611 00:16:12.997 09:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 73611 ']' 00:16:12.997 09:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 73611 00:16:12.997 09:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:16:12.997 09:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:12.997 09:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73611 00:16:12.997 killing process with pid 73611 00:16:12.997 09:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:12.997 09:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:12.997 09:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73611' 00:16:12.997 09:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 73611 00:16:12.997 09:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 73611 00:16:13.257 09:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:13.257 09:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:13.257 09:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:13.257 09:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:16:13.257 09:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:16:13.257 09:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:16:13.257 09:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:13.257 09:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:13.257 09:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:13.257 09:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:13.257 09:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:13.257 09:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:13.257 09:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@236 -- # ip link set 
nvmf_tgt_br2 nomaster 00:16:13.257 09:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:13.257 09:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:13.257 09:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:13.257 09:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:13.257 09:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:13.257 09:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:13.257 09:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:13.257 09:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:13.257 09:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:13.517 09:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:13.517 09:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:13.517 09:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:13.517 09:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:13.517 09:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@300 -- # return 0 00:16:13.517 00:16:13.517 real 0m4.789s 00:16:13.517 user 0m7.167s 00:16:13.517 sys 0m1.712s 00:16:13.517 09:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:13.517 09:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:16:13.517 ************************************ 00:16:13.517 END TEST nvmf_nsid 00:16:13.517 ************************************ 00:16:13.517 09:53:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:16:13.517 00:16:13.517 real 5m0.809s 00:16:13.517 user 10m20.586s 00:16:13.517 sys 1m12.592s 00:16:13.517 09:53:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:13.517 09:53:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:13.517 ************************************ 00:16:13.517 END TEST nvmf_target_extra 00:16:13.517 ************************************ 00:16:13.517 09:53:38 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:16:13.517 09:53:38 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:13.517 09:53:38 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:13.517 09:53:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:13.517 ************************************ 00:16:13.517 START TEST nvmf_host 00:16:13.517 ************************************ 00:16:13.517 09:53:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:16:13.517 * Looking for test storage... 
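The nvmf_nsid cleanup traced just above (nvmftestfini) unloads the host-side NVMe/TCP modules, strips only the iptables rules tagged SPDK_NVMF, and dismantles the veth/bridge test topology before the target network namespace is removed; the nvmf_host suite that starts here rebuilds the same topology. Condensed, and assuming remove_spdk_ns boils down to deleting nvmf_tgt_ns_spdk (the helper's body is not shown in this trace), the sequence is roughly:

    modprobe -r nvme-tcp nvme-fabrics
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" nomaster && ip link set "$dev" down
    done
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if && ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    ip netns delete nvmf_tgt_ns_spdk        # assumed equivalent of remove_spdk_ns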
00:16:13.517 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:16:13.517 09:53:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:13.517 09:53:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lcov --version 00:16:13.517 09:53:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:13.776 09:53:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:13.776 09:53:38 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:13.776 09:53:38 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:13.776 09:53:38 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:13.776 09:53:38 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:16:13.776 09:53:38 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:16:13.776 09:53:38 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:16:13.776 09:53:38 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:16:13.776 09:53:38 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:16:13.776 09:53:38 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:16:13.776 09:53:38 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:16:13.776 09:53:38 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:13.776 09:53:38 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:16:13.776 09:53:38 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:16:13.776 09:53:38 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:13.776 09:53:38 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:13.776 09:53:38 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:16:13.776 09:53:38 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:16:13.776 09:53:38 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:13.776 09:53:38 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:16:13.776 09:53:38 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:16:13.776 09:53:38 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:16:13.776 09:53:38 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:16:13.776 09:53:38 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:13.776 09:53:38 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:16:13.776 09:53:38 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:16:13.776 09:53:38 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:13.776 09:53:38 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:13.776 09:53:38 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:16:13.776 09:53:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:13.776 09:53:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:13.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:13.776 --rc genhtml_branch_coverage=1 00:16:13.776 --rc genhtml_function_coverage=1 00:16:13.776 --rc genhtml_legend=1 00:16:13.776 --rc geninfo_all_blocks=1 00:16:13.776 --rc geninfo_unexecuted_blocks=1 00:16:13.776 00:16:13.776 ' 00:16:13.776 09:53:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:13.776 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:16:13.776 --rc genhtml_branch_coverage=1 00:16:13.776 --rc genhtml_function_coverage=1 00:16:13.776 --rc genhtml_legend=1 00:16:13.776 --rc geninfo_all_blocks=1 00:16:13.776 --rc geninfo_unexecuted_blocks=1 00:16:13.776 00:16:13.776 ' 00:16:13.776 09:53:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:13.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:13.776 --rc genhtml_branch_coverage=1 00:16:13.776 --rc genhtml_function_coverage=1 00:16:13.776 --rc genhtml_legend=1 00:16:13.776 --rc geninfo_all_blocks=1 00:16:13.776 --rc geninfo_unexecuted_blocks=1 00:16:13.776 00:16:13.776 ' 00:16:13.776 09:53:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:13.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:13.776 --rc genhtml_branch_coverage=1 00:16:13.776 --rc genhtml_function_coverage=1 00:16:13.776 --rc genhtml_legend=1 00:16:13.776 --rc geninfo_all_blocks=1 00:16:13.777 --rc geninfo_unexecuted_blocks=1 00:16:13.777 00:16:13.777 ' 00:16:13.777 09:53:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:13.777 09:53:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:16:13.777 09:53:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:13.777 09:53:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:13.777 09:53:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:13.777 09:53:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:13.777 09:53:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:13.777 09:53:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:13.777 09:53:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:13.777 09:53:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:13.777 09:53:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:13.777 09:53:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:13.777 09:53:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:16:13.777 09:53:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:16:13.777 09:53:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:13.777 09:53:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:13.777 09:53:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:13.777 09:53:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:13.777 09:53:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:13.777 09:53:38 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:16:13.777 09:53:38 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:13.777 09:53:38 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:13.777 09:53:38 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:13.777 09:53:38 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.777 09:53:38 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.777 09:53:38 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.777 09:53:38 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:16:13.777 09:53:38 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.777 09:53:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:16:13.777 09:53:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:13.777 09:53:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:13.777 09:53:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:13.777 09:53:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:13.777 09:53:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:13.777 09:53:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:13.777 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:13.777 09:53:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:13.777 09:53:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:13.777 09:53:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:13.777 09:53:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:16:13.777 09:53:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:16:13.777 09:53:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 1 -eq 0 ]] 00:16:13.777 09:53:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:16:13.777 
09:53:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:13.777 09:53:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:13.777 09:53:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:13.777 ************************************ 00:16:13.777 START TEST nvmf_identify 00:16:13.777 ************************************ 00:16:13.777 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:16:13.777 * Looking for test storage... 00:16:13.777 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:13.777 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:13.777 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lcov --version 00:16:13.777 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:14.037 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:14.037 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:14.037 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:14.037 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:14.037 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:16:14.037 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:16:14.037 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:16:14.037 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:16:14.037 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:16:14.037 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:16:14.037 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:16:14.037 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:14.037 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:16:14.037 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:16:14.037 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:14.037 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:14.037 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:16:14.037 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:16:14.037 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:14.037 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:16:14.037 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:16:14.037 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:16:14.037 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:16:14.037 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:14.037 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:16:14.037 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:16:14.037 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:14.037 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:14.037 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:16:14.037 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:14.037 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:14.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:14.037 --rc genhtml_branch_coverage=1 00:16:14.037 --rc genhtml_function_coverage=1 00:16:14.037 --rc genhtml_legend=1 00:16:14.037 --rc geninfo_all_blocks=1 00:16:14.037 --rc geninfo_unexecuted_blocks=1 00:16:14.037 00:16:14.037 ' 00:16:14.037 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:14.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:14.037 --rc genhtml_branch_coverage=1 00:16:14.037 --rc genhtml_function_coverage=1 00:16:14.037 --rc genhtml_legend=1 00:16:14.037 --rc geninfo_all_blocks=1 00:16:14.037 --rc geninfo_unexecuted_blocks=1 00:16:14.037 00:16:14.037 ' 00:16:14.037 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:14.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:14.037 --rc genhtml_branch_coverage=1 00:16:14.037 --rc genhtml_function_coverage=1 00:16:14.037 --rc genhtml_legend=1 00:16:14.037 --rc geninfo_all_blocks=1 00:16:14.037 --rc geninfo_unexecuted_blocks=1 00:16:14.037 00:16:14.037 ' 00:16:14.037 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:14.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:14.037 --rc genhtml_branch_coverage=1 00:16:14.037 --rc genhtml_function_coverage=1 00:16:14.037 --rc genhtml_legend=1 00:16:14.037 --rc geninfo_all_blocks=1 00:16:14.037 --rc geninfo_unexecuted_blocks=1 00:16:14.037 00:16:14.037 ' 00:16:14.037 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:14.037 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:16:14.037 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:14.037 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:16:14.037 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:14.037 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:14.037 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:14.037 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:14.037 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:14.037 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:14.037 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:14.037 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:14.037 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:16:14.037 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:16:14.037 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:14.037 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:14.037 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:14.038 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:14.038 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:14.038 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:16:14.038 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:14.038 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:14.038 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:14.038 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.038 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.038 
09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.038 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:16:14.038 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.038 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:16:14.038 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:14.038 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:14.038 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:14.038 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:14.038 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:14.038 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:14.038 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:14.038 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:14.038 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:14.038 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:14.038 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:14.038 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:14.038 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:16:14.038 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:14.038 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:14.038 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:14.038 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:14.038 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:14.038 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:14.038 09:53:39 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:14.038 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:14.038 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:14.038 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:14.038 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:14.038 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:14.038 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:14.038 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:14.038 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:14.038 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:14.038 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:14.038 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:14.038 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:14.038 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:14.038 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:14.038 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:14.038 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:14.038 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:14.038 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:14.038 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:14.038 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:14.038 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:14.038 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:14.038 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:14.038 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:14.038 Cannot find device "nvmf_init_br" 00:16:14.038 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 00:16:14.038 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:14.038 Cannot find device "nvmf_init_br2" 00:16:14.038 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 00:16:14.038 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:14.038 Cannot find device "nvmf_tgt_br" 00:16:14.038 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # true 00:16:14.038 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 
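The "Cannot find device" messages here are expected: nvmf_veth_init first tries to tear down whatever a previous run may have left behind, and only then builds the test topology traced below. Stripped of the xtrace noise, that setup amounts to the following sketch (interface names and addresses exactly as they appear in this trace; every created interface and the bridge are also brought up):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator side
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if    # target side
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br    # likewise for nvmf_init_br2, nvmf_tgt_br, nvmf_tgt_br2

The SPDK_NVMF-tagged iptables ACCEPT rules and the four single-packet pings that follow are only sanity checks that the bridged 10.0.0.0/24 addresses are reachable before the target is started inside the namespace.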
00:16:14.038 Cannot find device "nvmf_tgt_br2" 00:16:14.038 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # true 00:16:14.038 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:14.038 Cannot find device "nvmf_init_br" 00:16:14.038 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # true 00:16:14.038 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:14.038 Cannot find device "nvmf_init_br2" 00:16:14.038 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # true 00:16:14.038 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:14.038 Cannot find device "nvmf_tgt_br" 00:16:14.038 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # true 00:16:14.038 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:14.038 Cannot find device "nvmf_tgt_br2" 00:16:14.038 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # true 00:16:14.038 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:14.038 Cannot find device "nvmf_br" 00:16:14.038 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # true 00:16:14.038 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:14.038 Cannot find device "nvmf_init_if" 00:16:14.038 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # true 00:16:14.038 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:14.038 Cannot find device "nvmf_init_if2" 00:16:14.038 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # true 00:16:14.038 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:14.038 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:14.038 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # true 00:16:14.038 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:14.038 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:14.038 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # true 00:16:14.038 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:14.038 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:14.038 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:14.038 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:14.038 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:14.298 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:14.298 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:14.298 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:14.298 
09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:14.298 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:14.298 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:14.298 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:14.298 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:14.298 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:14.298 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:14.298 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:14.298 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:14.298 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:14.298 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:14.298 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:14.298 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:14.298 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:14.298 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:14.298 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:14.298 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:14.298 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:14.298 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:14.298 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:14.298 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:14.298 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:14.298 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:14.298 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:14.298 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:14.298 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:16:14.298 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:16:14.298 00:16:14.298 --- 10.0.0.3 ping statistics --- 00:16:14.298 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:14.298 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:16:14.298 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:14.298 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:14.298 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.070 ms 00:16:14.298 00:16:14.298 --- 10.0.0.4 ping statistics --- 00:16:14.298 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:14.298 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:16:14.298 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:14.298 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:14.298 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:16:14.298 00:16:14.298 --- 10.0.0.1 ping statistics --- 00:16:14.298 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:14.298 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:16:14.298 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:14.298 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:14.298 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.036 ms 00:16:14.298 00:16:14.298 --- 10.0.0.2 ping statistics --- 00:16:14.298 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:14.298 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:16:14.298 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:14.298 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@461 -- # return 0 00:16:14.298 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:14.298 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:14.298 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:14.298 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:14.298 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:14.298 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:14.298 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:14.298 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:16:14.298 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:14.298 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:14.298 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=73988 00:16:14.298 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:14.298 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:14.298 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 73988 00:16:14.298 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 73988 ']' 00:16:14.298 
09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:14.298 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:14.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:14.298 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:14.298 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:14.298 09:53:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:14.557 [2024-12-06 09:53:39.626083] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:16:14.557 [2024-12-06 09:53:39.626200] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:14.557 [2024-12-06 09:53:39.779531] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:14.819 [2024-12-06 09:53:39.846047] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:14.819 [2024-12-06 09:53:39.846121] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:14.819 [2024-12-06 09:53:39.846135] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:14.819 [2024-12-06 09:53:39.846146] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:14.819 [2024-12-06 09:53:39.846166] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
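Condensed from the nvmf/common.sh trace above, for readers skimming the log: the harness assigns test addresses to the initiator-side interfaces and to the interfaces inside the target namespace, bridges the host-side veth ends, opens TCP port 4420 in iptables, sanity-checks reachability with ping, loads nvme-tcp, and launches nvmf_tgt inside the nvmf_tgt_ns_spdk namespace. A minimal shell sketch of that bring-up (interface, namespace, and binary names taken from the log; creation of the veth pairs and the namespace happens earlier in the script and is assumed here, and the iptables comments added by the ipts wrapper are omitted):

# addresses on the initiator side and inside the target namespace
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up; ip link set nvmf_init_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# bridge the host-side veth ends so initiator and target namespace can reach each other
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br   # likewise nvmf_init_br2, nvmf_tgt_br, nvmf_tgt_br2

# let NVMe/TCP traffic (port 4420) in and across the bridge
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# verify reachability, load the kernel initiator, start the target in the namespace
ping -c 1 10.0.0.3 && ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
modprobe nvme-tcp
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &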
00:16:14.819 [2024-12-06 09:53:39.847483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:14.819 [2024-12-06 09:53:39.847616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:14.819 [2024-12-06 09:53:39.847689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:14.819 [2024-12-06 09:53:39.847692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:14.819 [2024-12-06 09:53:39.910331] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:15.388 09:53:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:15.388 09:53:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:16:15.388 09:53:40 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:15.388 09:53:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.388 09:53:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:15.388 [2024-12-06 09:53:40.625415] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:15.388 09:53:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.388 09:53:40 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:16:15.388 09:53:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:15.388 09:53:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:15.648 09:53:40 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:15.648 09:53:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.648 09:53:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:15.648 Malloc0 00:16:15.648 09:53:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.648 09:53:40 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:15.648 09:53:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.648 09:53:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:15.648 09:53:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.648 09:53:40 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:16:15.648 09:53:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.648 09:53:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:15.648 09:53:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.648 09:53:40 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:15.648 09:53:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.648 09:53:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:15.648 [2024-12-06 09:53:40.741844] tcp.c:1099:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:15.648 09:53:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.648 09:53:40 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:16:15.648 09:53:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.648 09:53:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:15.648 09:53:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.648 09:53:40 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:16:15.648 09:53:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.648 09:53:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:15.648 [ 00:16:15.648 { 00:16:15.648 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:15.648 "subtype": "Discovery", 00:16:15.648 "listen_addresses": [ 00:16:15.648 { 00:16:15.648 "trtype": "TCP", 00:16:15.648 "adrfam": "IPv4", 00:16:15.648 "traddr": "10.0.0.3", 00:16:15.648 "trsvcid": "4420" 00:16:15.648 } 00:16:15.648 ], 00:16:15.648 "allow_any_host": true, 00:16:15.648 "hosts": [] 00:16:15.648 }, 00:16:15.648 { 00:16:15.648 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:15.648 "subtype": "NVMe", 00:16:15.648 "listen_addresses": [ 00:16:15.648 { 00:16:15.648 "trtype": "TCP", 00:16:15.648 "adrfam": "IPv4", 00:16:15.648 "traddr": "10.0.0.3", 00:16:15.648 "trsvcid": "4420" 00:16:15.648 } 00:16:15.648 ], 00:16:15.648 "allow_any_host": true, 00:16:15.648 "hosts": [], 00:16:15.648 "serial_number": "SPDK00000000000001", 00:16:15.648 "model_number": "SPDK bdev Controller", 00:16:15.648 "max_namespaces": 32, 00:16:15.648 "min_cntlid": 1, 00:16:15.648 "max_cntlid": 65519, 00:16:15.648 "namespaces": [ 00:16:15.648 { 00:16:15.648 "nsid": 1, 00:16:15.648 "bdev_name": "Malloc0", 00:16:15.648 "name": "Malloc0", 00:16:15.648 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:16:15.648 "eui64": "ABCDEF0123456789", 00:16:15.648 "uuid": "91c35173-9e4c-46a5-aa03-74457bca8090" 00:16:15.648 } 00:16:15.648 ] 00:16:15.648 } 00:16:15.648 ] 00:16:15.648 09:53:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.648 09:53:40 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:16:15.648 [2024-12-06 09:53:40.796353] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
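The rpc_cmd sequence above is the whole target-side configuration for this test: one TCP transport, one malloc-backed namespace exposed under nqn.2016-06.io.spdk:cnode1, and listeners for both the subsystem and discovery on 10.0.0.3:4420. The same steps, sketched as direct scripts/rpc.py calls (rpc_cmd in the test scripts wraps these RPCs over the default /var/tmp/spdk.sock; the relative rpc.py path is an assumption):

# transport, backing bdev, subsystem, namespace, listeners -- mirrors host/identify.sh
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
    --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
scripts/rpc.py nvmf_get_subsystems    # prints the JSON shown above

With that in place, the spdk_nvme_identify run that follows connects to the discovery subsystem at 10.0.0.3:4420 and dumps the discovery controller's capabilities and its two discovery log entries.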
00:16:15.648 [2024-12-06 09:53:40.796415] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74023 ] 00:16:15.912 [2024-12-06 09:53:40.946512] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:16:15.912 [2024-12-06 09:53:40.946591] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:16:15.912 [2024-12-06 09:53:40.946598] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:16:15.912 [2024-12-06 09:53:40.946614] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:16:15.912 [2024-12-06 09:53:40.946625] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:16:15.912 [2024-12-06 09:53:40.946918] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:16:15.912 [2024-12-06 09:53:40.946990] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x22d7750 0 00:16:15.912 [2024-12-06 09:53:40.951641] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:16:15.912 [2024-12-06 09:53:40.951663] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:16:15.912 [2024-12-06 09:53:40.951684] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:16:15.912 [2024-12-06 09:53:40.951688] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:16:15.912 [2024-12-06 09:53:40.951721] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.912 [2024-12-06 09:53:40.951728] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.912 [2024-12-06 09:53:40.951732] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22d7750) 00:16:15.912 [2024-12-06 09:53:40.951745] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:16:15.912 [2024-12-06 09:53:40.951777] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x233b740, cid 0, qid 0 00:16:15.912 [2024-12-06 09:53:40.959644] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.912 [2024-12-06 09:53:40.959662] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.912 [2024-12-06 09:53:40.959667] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.912 [2024-12-06 09:53:40.959687] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x233b740) on tqpair=0x22d7750 00:16:15.912 [2024-12-06 09:53:40.959701] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:16:15.912 [2024-12-06 09:53:40.959709] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:16:15.912 [2024-12-06 09:53:40.959715] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:16:15.912 [2024-12-06 09:53:40.959731] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.912 [2024-12-06 09:53:40.959736] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:16:15.912 [2024-12-06 09:53:40.959740] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22d7750) 00:16:15.912 [2024-12-06 09:53:40.959748] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.913 [2024-12-06 09:53:40.959773] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x233b740, cid 0, qid 0 00:16:15.913 [2024-12-06 09:53:40.959833] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.913 [2024-12-06 09:53:40.959840] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.913 [2024-12-06 09:53:40.959843] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.913 [2024-12-06 09:53:40.959847] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x233b740) on tqpair=0x22d7750 00:16:15.913 [2024-12-06 09:53:40.959852] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:16:15.913 [2024-12-06 09:53:40.959859] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:16:15.913 [2024-12-06 09:53:40.959866] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.913 [2024-12-06 09:53:40.959870] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.913 [2024-12-06 09:53:40.959873] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22d7750) 00:16:15.913 [2024-12-06 09:53:40.959880] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.913 [2024-12-06 09:53:40.959900] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x233b740, cid 0, qid 0 00:16:15.913 [2024-12-06 09:53:40.959959] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.913 [2024-12-06 09:53:40.959965] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.913 [2024-12-06 09:53:40.959968] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.913 [2024-12-06 09:53:40.959972] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x233b740) on tqpair=0x22d7750 00:16:15.913 [2024-12-06 09:53:40.959977] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:16:15.913 [2024-12-06 09:53:40.959985] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:16:15.913 [2024-12-06 09:53:40.959993] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.913 [2024-12-06 09:53:40.959997] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.913 [2024-12-06 09:53:40.960000] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22d7750) 00:16:15.913 [2024-12-06 09:53:40.960007] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.913 [2024-12-06 09:53:40.960023] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x233b740, cid 0, qid 0 00:16:15.913 [2024-12-06 09:53:40.960069] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.913 [2024-12-06 09:53:40.960075] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.913 [2024-12-06 09:53:40.960079] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.913 [2024-12-06 09:53:40.960083] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x233b740) on tqpair=0x22d7750 00:16:15.913 [2024-12-06 09:53:40.960088] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:15.913 [2024-12-06 09:53:40.960097] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.913 [2024-12-06 09:53:40.960102] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.913 [2024-12-06 09:53:40.960105] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22d7750) 00:16:15.913 [2024-12-06 09:53:40.960112] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.913 [2024-12-06 09:53:40.960128] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x233b740, cid 0, qid 0 00:16:15.913 [2024-12-06 09:53:40.960172] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.913 [2024-12-06 09:53:40.960178] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.913 [2024-12-06 09:53:40.960181] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.913 [2024-12-06 09:53:40.960185] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x233b740) on tqpair=0x22d7750 00:16:15.913 [2024-12-06 09:53:40.960190] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:16:15.913 [2024-12-06 09:53:40.960195] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:16:15.913 [2024-12-06 09:53:40.960202] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:15.913 [2024-12-06 09:53:40.960312] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:16:15.913 [2024-12-06 09:53:40.960318] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:15.913 [2024-12-06 09:53:40.960327] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.913 [2024-12-06 09:53:40.960331] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.913 [2024-12-06 09:53:40.960334] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22d7750) 00:16:15.913 [2024-12-06 09:53:40.960341] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.913 [2024-12-06 09:53:40.960359] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x233b740, cid 0, qid 0 00:16:15.913 [2024-12-06 09:53:40.960408] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.913 [2024-12-06 09:53:40.960414] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.913 [2024-12-06 09:53:40.960418] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:16:15.913 [2024-12-06 09:53:40.960422] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x233b740) on tqpair=0x22d7750 00:16:15.913 [2024-12-06 09:53:40.960427] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:15.913 [2024-12-06 09:53:40.960436] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.913 [2024-12-06 09:53:40.960441] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.913 [2024-12-06 09:53:40.960444] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22d7750) 00:16:15.913 [2024-12-06 09:53:40.960451] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.913 [2024-12-06 09:53:40.960468] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x233b740, cid 0, qid 0 00:16:15.913 [2024-12-06 09:53:40.960510] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.913 [2024-12-06 09:53:40.960516] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.913 [2024-12-06 09:53:40.960520] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.913 [2024-12-06 09:53:40.960523] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x233b740) on tqpair=0x22d7750 00:16:15.913 [2024-12-06 09:53:40.960528] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:15.913 [2024-12-06 09:53:40.960533] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:16:15.913 [2024-12-06 09:53:40.960540] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:16:15.913 [2024-12-06 09:53:40.960550] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:16:15.913 [2024-12-06 09:53:40.960560] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.913 [2024-12-06 09:53:40.960564] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22d7750) 00:16:15.913 [2024-12-06 09:53:40.960570] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.913 [2024-12-06 09:53:40.960604] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x233b740, cid 0, qid 0 00:16:15.913 [2024-12-06 09:53:40.960694] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:15.913 [2024-12-06 09:53:40.960703] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:15.913 [2024-12-06 09:53:40.960707] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:15.913 [2024-12-06 09:53:40.960710] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x22d7750): datao=0, datal=4096, cccid=0 00:16:15.913 [2024-12-06 09:53:40.960715] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x233b740) on tqpair(0x22d7750): expected_datao=0, payload_size=4096 00:16:15.913 [2024-12-06 09:53:40.960720] nvme_tcp.c: 732:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:16:15.913 [2024-12-06 09:53:40.960727] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:15.913 [2024-12-06 09:53:40.960732] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:15.913 [2024-12-06 09:53:40.960740] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.913 [2024-12-06 09:53:40.960746] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.913 [2024-12-06 09:53:40.960750] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.913 [2024-12-06 09:53:40.960754] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x233b740) on tqpair=0x22d7750 00:16:15.913 [2024-12-06 09:53:40.960763] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:16:15.913 [2024-12-06 09:53:40.960768] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:16:15.913 [2024-12-06 09:53:40.960772] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:16:15.913 [2024-12-06 09:53:40.960777] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:16:15.913 [2024-12-06 09:53:40.960782] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:16:15.913 [2024-12-06 09:53:40.960787] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:16:15.913 [2024-12-06 09:53:40.960796] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:16:15.913 [2024-12-06 09:53:40.960803] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.913 [2024-12-06 09:53:40.960807] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.913 [2024-12-06 09:53:40.960811] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22d7750) 00:16:15.913 [2024-12-06 09:53:40.960818] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:15.913 [2024-12-06 09:53:40.960838] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x233b740, cid 0, qid 0 00:16:15.913 [2024-12-06 09:53:40.960889] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.913 [2024-12-06 09:53:40.960895] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.913 [2024-12-06 09:53:40.960899] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.913 [2024-12-06 09:53:40.960903] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x233b740) on tqpair=0x22d7750 00:16:15.914 [2024-12-06 09:53:40.960918] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.914 [2024-12-06 09:53:40.960923] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.914 [2024-12-06 09:53:40.960927] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22d7750) 00:16:15.914 [2024-12-06 09:53:40.960933] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:15.914 
[2024-12-06 09:53:40.960940] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.914 [2024-12-06 09:53:40.960943] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.914 [2024-12-06 09:53:40.960947] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x22d7750) 00:16:15.914 [2024-12-06 09:53:40.960952] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:15.914 [2024-12-06 09:53:40.960958] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.914 [2024-12-06 09:53:40.960962] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.914 [2024-12-06 09:53:40.960965] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x22d7750) 00:16:15.914 [2024-12-06 09:53:40.960971] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:15.914 [2024-12-06 09:53:40.960976] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.914 [2024-12-06 09:53:40.960980] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.914 [2024-12-06 09:53:40.960983] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22d7750) 00:16:15.914 [2024-12-06 09:53:40.961004] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:15.914 [2024-12-06 09:53:40.961009] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:16:15.914 [2024-12-06 09:53:40.961017] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:15.914 [2024-12-06 09:53:40.961023] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.914 [2024-12-06 09:53:40.961027] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x22d7750) 00:16:15.914 [2024-12-06 09:53:40.961033] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.914 [2024-12-06 09:53:40.961053] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x233b740, cid 0, qid 0 00:16:15.914 [2024-12-06 09:53:40.961059] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x233b8c0, cid 1, qid 0 00:16:15.914 [2024-12-06 09:53:40.961064] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x233ba40, cid 2, qid 0 00:16:15.914 [2024-12-06 09:53:40.961068] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x233bbc0, cid 3, qid 0 00:16:15.914 [2024-12-06 09:53:40.961073] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x233bd40, cid 4, qid 0 00:16:15.914 [2024-12-06 09:53:40.961179] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.914 [2024-12-06 09:53:40.961186] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.914 [2024-12-06 09:53:40.961189] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.914 [2024-12-06 09:53:40.961193] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x233bd40) on tqpair=0x22d7750 00:16:15.914 [2024-12-06 
09:53:40.961198] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:16:15.914 [2024-12-06 09:53:40.961208] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:16:15.914 [2024-12-06 09:53:40.961219] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.914 [2024-12-06 09:53:40.961224] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x22d7750) 00:16:15.914 [2024-12-06 09:53:40.961230] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.914 [2024-12-06 09:53:40.961248] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x233bd40, cid 4, qid 0 00:16:15.914 [2024-12-06 09:53:40.961309] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:15.914 [2024-12-06 09:53:40.961315] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:15.914 [2024-12-06 09:53:40.961319] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:15.914 [2024-12-06 09:53:40.961322] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x22d7750): datao=0, datal=4096, cccid=4 00:16:15.914 [2024-12-06 09:53:40.961327] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x233bd40) on tqpair(0x22d7750): expected_datao=0, payload_size=4096 00:16:15.914 [2024-12-06 09:53:40.961331] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.914 [2024-12-06 09:53:40.961337] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:15.914 [2024-12-06 09:53:40.961341] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:15.914 [2024-12-06 09:53:40.961349] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.914 [2024-12-06 09:53:40.961355] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.914 [2024-12-06 09:53:40.961358] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.914 [2024-12-06 09:53:40.961362] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x233bd40) on tqpair=0x22d7750 00:16:15.914 [2024-12-06 09:53:40.961374] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:16:15.914 [2024-12-06 09:53:40.961401] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.914 [2024-12-06 09:53:40.961406] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x22d7750) 00:16:15.914 [2024-12-06 09:53:40.961413] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.914 [2024-12-06 09:53:40.961420] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.914 [2024-12-06 09:53:40.961424] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.914 [2024-12-06 09:53:40.961427] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x22d7750) 00:16:15.914 [2024-12-06 09:53:40.961433] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:16:15.914 [2024-12-06 09:53:40.961456] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x233bd40, cid 4, qid 0 00:16:15.914 [2024-12-06 09:53:40.961463] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x233bec0, cid 5, qid 0 00:16:15.914 [2024-12-06 09:53:40.961579] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:15.914 [2024-12-06 09:53:40.961598] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:15.914 [2024-12-06 09:53:40.961603] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:15.914 [2024-12-06 09:53:40.961621] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x22d7750): datao=0, datal=1024, cccid=4 00:16:15.914 [2024-12-06 09:53:40.961626] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x233bd40) on tqpair(0x22d7750): expected_datao=0, payload_size=1024 00:16:15.914 [2024-12-06 09:53:40.961631] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.914 [2024-12-06 09:53:40.961637] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:15.914 [2024-12-06 09:53:40.961641] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:15.914 [2024-12-06 09:53:40.961646] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.914 [2024-12-06 09:53:40.961652] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.914 [2024-12-06 09:53:40.961656] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.914 [2024-12-06 09:53:40.961659] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x233bec0) on tqpair=0x22d7750 00:16:15.914 [2024-12-06 09:53:40.961679] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.914 [2024-12-06 09:53:40.961686] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.914 [2024-12-06 09:53:40.961690] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.914 [2024-12-06 09:53:40.961694] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x233bd40) on tqpair=0x22d7750 00:16:15.914 [2024-12-06 09:53:40.961705] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.914 [2024-12-06 09:53:40.961710] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x22d7750) 00:16:15.914 [2024-12-06 09:53:40.961717] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.914 [2024-12-06 09:53:40.961741] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x233bd40, cid 4, qid 0 00:16:15.914 [2024-12-06 09:53:40.961810] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:15.914 [2024-12-06 09:53:40.961816] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:15.914 [2024-12-06 09:53:40.961819] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:15.914 [2024-12-06 09:53:40.961823] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x22d7750): datao=0, datal=3072, cccid=4 00:16:15.914 [2024-12-06 09:53:40.961843] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x233bd40) on tqpair(0x22d7750): expected_datao=0, payload_size=3072 00:16:15.914 [2024-12-06 09:53:40.961848] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.914 [2024-12-06 09:53:40.961855] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 
00:16:15.914 [2024-12-06 09:53:40.961859] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:15.914 [2024-12-06 09:53:40.961867] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.914 [2024-12-06 09:53:40.961873] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.914 [2024-12-06 09:53:40.961877] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.914 [2024-12-06 09:53:40.961881] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x233bd40) on tqpair=0x22d7750 00:16:15.914 [2024-12-06 09:53:40.961890] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.914 [2024-12-06 09:53:40.961895] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x22d7750) 00:16:15.914 [2024-12-06 09:53:40.961902] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.914 [2024-12-06 09:53:40.961924] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x233bd40, cid 4, qid 0 00:16:15.914 [2024-12-06 09:53:40.961998] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:15.914 [2024-12-06 09:53:40.962004] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:15.914 [2024-12-06 09:53:40.962008] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:15.914 [2024-12-06 09:53:40.962012] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x22d7750): datao=0, datal=8, cccid=4 00:16:15.914 [2024-12-06 09:53:40.962016] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x233bd40) on tqpair(0x22d7750): expected_datao=0, payload_size=8 00:16:15.914 [2024-12-06 09:53:40.962020] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.914 ===================================================== 00:16:15.914 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2014-08.org.nvmexpress.discovery 00:16:15.914 ===================================================== 00:16:15.914 Controller Capabilities/Features 00:16:15.914 ================================ 00:16:15.915 Vendor ID: 0000 00:16:15.915 Subsystem Vendor ID: 0000 00:16:15.915 Serial Number: .................... 00:16:15.915 Model Number: ........................................ 
00:16:15.915 Firmware Version: 25.01 00:16:15.915 Recommended Arb Burst: 0 00:16:15.915 IEEE OUI Identifier: 00 00 00 00:16:15.915 Multi-path I/O 00:16:15.915 May have multiple subsystem ports: No 00:16:15.915 May have multiple controllers: No 00:16:15.915 Associated with SR-IOV VF: No 00:16:15.915 Max Data Transfer Size: 131072 00:16:15.915 Max Number of Namespaces: 0 00:16:15.915 Max Number of I/O Queues: 1024 00:16:15.915 NVMe Specification Version (VS): 1.3 00:16:15.915 NVMe Specification Version (Identify): 1.3 00:16:15.915 Maximum Queue Entries: 128 00:16:15.915 Contiguous Queues Required: Yes 00:16:15.915 Arbitration Mechanisms Supported 00:16:15.915 Weighted Round Robin: Not Supported 00:16:15.915 Vendor Specific: Not Supported 00:16:15.915 Reset Timeout: 15000 ms 00:16:15.915 Doorbell Stride: 4 bytes 00:16:15.915 NVM Subsystem Reset: Not Supported 00:16:15.915 Command Sets Supported 00:16:15.915 NVM Command Set: Supported 00:16:15.915 Boot Partition: Not Supported 00:16:15.915 Memory Page Size Minimum: 4096 bytes 00:16:15.915 Memory Page Size Maximum: 4096 bytes 00:16:15.915 Persistent Memory Region: Not Supported 00:16:15.915 Optional Asynchronous Events Supported 00:16:15.915 Namespace Attribute Notices: Not Supported 00:16:15.915 Firmware Activation Notices: Not Supported 00:16:15.915 ANA Change Notices: Not Supported 00:16:15.915 PLE Aggregate Log Change Notices: Not Supported 00:16:15.915 LBA Status Info Alert Notices: Not Supported 00:16:15.915 EGE Aggregate Log Change Notices: Not Supported 00:16:15.915 Normal NVM Subsystem Shutdown event: Not Supported 00:16:15.915 Zone Descriptor Change Notices: Not Supported 00:16:15.915 Discovery Log Change Notices: Supported 00:16:15.915 Controller Attributes 00:16:15.915 128-bit Host Identifier: Not Supported 00:16:15.915 Non-Operational Permissive Mode: Not Supported 00:16:15.915 NVM Sets: Not Supported 00:16:15.915 Read Recovery Levels: Not Supported 00:16:15.915 Endurance Groups: Not Supported 00:16:15.915 Predictable Latency Mode: Not Supported 00:16:15.915 Traffic Based Keep ALive: Not Supported 00:16:15.915 Namespace Granularity: Not Supported 00:16:15.915 SQ Associations: Not Supported 00:16:15.915 UUID List: Not Supported 00:16:15.915 Multi-Domain Subsystem: Not Supported 00:16:15.915 Fixed Capacity Management: Not Supported 00:16:15.915 Variable Capacity Management: Not Supported 00:16:15.915 Delete Endurance Group: Not Supported 00:16:15.915 Delete NVM Set: Not Supported 00:16:15.915 Extended LBA Formats Supported: Not Supported 00:16:15.915 Flexible Data Placement Supported: Not Supported 00:16:15.915 00:16:15.915 Controller Memory Buffer Support 00:16:15.915 ================================ 00:16:15.915 Supported: No 00:16:15.915 00:16:15.915 Persistent Memory Region Support 00:16:15.915 ================================ 00:16:15.915 Supported: No 00:16:15.915 00:16:15.915 Admin Command Set Attributes 00:16:15.915 ============================ 00:16:15.915 Security Send/Receive: Not Supported 00:16:15.915 Format NVM: Not Supported 00:16:15.915 Firmware Activate/Download: Not Supported 00:16:15.915 Namespace Management: Not Supported 00:16:15.915 Device Self-Test: Not Supported 00:16:15.915 Directives: Not Supported 00:16:15.915 NVMe-MI: Not Supported 00:16:15.915 Virtualization Management: Not Supported 00:16:15.915 Doorbell Buffer Config: Not Supported 00:16:15.915 Get LBA Status Capability: Not Supported 00:16:15.915 Command & Feature Lockdown Capability: Not Supported 00:16:15.915 Abort Command Limit: 1 00:16:15.915 Async 
Event Request Limit: 4 00:16:15.915 Number of Firmware Slots: N/A 00:16:15.915 Firmware Slot 1 Read-Only: N/A 00:16:15.915 Firmware Activation Without Reset: N/A 00:16:15.915 Multiple Update Detection Support: N/A 00:16:15.915 Firmware Update Granularity: No Information Provided 00:16:15.915 Per-Namespace SMART Log: No 00:16:15.915 Asymmetric Namespace Access Log Page: Not Supported 00:16:15.915 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:16:15.915 Command Effects Log Page: Not Supported 00:16:15.915 Get Log Page Extended Data: Supported 00:16:15.915 Telemetry Log Pages: Not Supported 00:16:15.915 Persistent Event Log Pages: Not Supported 00:16:15.915 Supported Log Pages Log Page: May Support 00:16:15.915 Commands Supported & Effects Log Page: Not Supported 00:16:15.915 Feature Identifiers & Effects Log Page:May Support 00:16:15.915 NVMe-MI Commands & Effects Log Page: May Support 00:16:15.915 Data Area 4 for Telemetry Log: Not Supported 00:16:15.915 Error Log Page Entries Supported: 128 00:16:15.915 Keep Alive: Not Supported 00:16:15.915 00:16:15.915 NVM Command Set Attributes 00:16:15.915 ========================== 00:16:15.915 Submission Queue Entry Size 00:16:15.915 Max: 1 00:16:15.915 Min: 1 00:16:15.915 Completion Queue Entry Size 00:16:15.915 Max: 1 00:16:15.915 Min: 1 00:16:15.915 Number of Namespaces: 0 00:16:15.915 Compare Command: Not Supported 00:16:15.915 Write Uncorrectable Command: Not Supported 00:16:15.915 Dataset Management Command: Not Supported 00:16:15.915 Write Zeroes Command: Not Supported 00:16:15.915 Set Features Save Field: Not Supported 00:16:15.915 Reservations: Not Supported 00:16:15.915 Timestamp: Not Supported 00:16:15.915 Copy: Not Supported 00:16:15.915 Volatile Write Cache: Not Present 00:16:15.915 Atomic Write Unit (Normal): 1 00:16:15.915 Atomic Write Unit (PFail): 1 00:16:15.915 Atomic Compare & Write Unit: 1 00:16:15.915 Fused Compare & Write: Supported 00:16:15.915 Scatter-Gather List 00:16:15.915 SGL Command Set: Supported 00:16:15.915 SGL Keyed: Supported 00:16:15.915 SGL Bit Bucket Descriptor: Not Supported 00:16:15.915 SGL Metadata Pointer: Not Supported 00:16:15.915 Oversized SGL: Not Supported 00:16:15.915 SGL Metadata Address: Not Supported 00:16:15.915 SGL Offset: Supported 00:16:15.915 Transport SGL Data Block: Not Supported 00:16:15.915 Replay Protected Memory Block: Not Supported 00:16:15.915 00:16:15.915 Firmware Slot Information 00:16:15.915 ========================= 00:16:15.915 Active slot: 0 00:16:15.915 00:16:15.915 00:16:15.915 Error Log 00:16:15.915 ========= 00:16:15.915 00:16:15.915 Active Namespaces 00:16:15.915 ================= 00:16:15.915 Discovery Log Page 00:16:15.915 ================== 00:16:15.915 Generation Counter: 2 00:16:15.915 Number of Records: 2 00:16:15.915 Record Format: 0 00:16:15.915 00:16:15.915 Discovery Log Entry 0 00:16:15.915 ---------------------- 00:16:15.915 Transport Type: 3 (TCP) 00:16:15.915 Address Family: 1 (IPv4) 00:16:15.915 Subsystem Type: 3 (Current Discovery Subsystem) 00:16:15.915 Entry Flags: 00:16:15.915 Duplicate Returned Information: 1 00:16:15.915 Explicit Persistent Connection Support for Discovery: 1 00:16:15.915 Transport Requirements: 00:16:15.915 Secure Channel: Not Required 00:16:15.915 Port ID: 0 (0x0000) 00:16:15.915 Controller ID: 65535 (0xffff) 00:16:15.915 Admin Max SQ Size: 128 00:16:15.915 Transport Service Identifier: 4420 00:16:15.915 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:16:15.915 Transport Address: 10.0.0.3 00:16:15.915 
Discovery Log Entry 1 00:16:15.915 ---------------------- 00:16:15.915 Transport Type: 3 (TCP) 00:16:15.915 Address Family: 1 (IPv4) 00:16:15.915 Subsystem Type: 2 (NVM Subsystem) 00:16:15.915 Entry Flags: 00:16:15.915 Duplicate Returned Information: 0 00:16:15.915 Explicit Persistent Connection Support for Discovery: 0 00:16:15.915 Transport Requirements: 00:16:15.915 Secure Channel: Not Required 00:16:15.915 Port ID: 0 (0x0000) 00:16:15.915 Controller ID: 65535 (0xffff) 00:16:15.915 Admin Max SQ Size: 128 00:16:15.915 Transport Service Identifier: 4420 00:16:15.915 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:16:15.915 Transport Address: 10.0.0.3 [2024-12-06 09:53:40.962026] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:15.915 [2024-12-06 09:53:40.962030] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:15.915 [2024-12-06 09:53:40.962045] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.915 [2024-12-06 09:53:40.962052] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.915 [2024-12-06 09:53:40.962056] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.915 [2024-12-06 09:53:40.962059] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x233bd40) on tqpair=0x22d7750 00:16:15.916 [2024-12-06 09:53:40.962148] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:16:15.916 [2024-12-06 09:53:40.962160] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x233b740) on tqpair=0x22d7750 00:16:15.916 [2024-12-06 09:53:40.962167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:15.916 [2024-12-06 09:53:40.962172] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x233b8c0) on tqpair=0x22d7750 00:16:15.916 [2024-12-06 09:53:40.962177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:15.916 [2024-12-06 09:53:40.962181] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x233ba40) on tqpair=0x22d7750 00:16:15.916 [2024-12-06 09:53:40.962186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:15.916 [2024-12-06 09:53:40.962190] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x233bbc0) on tqpair=0x22d7750 00:16:15.916 [2024-12-06 09:53:40.962195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:15.916 [2024-12-06 09:53:40.962203] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.916 [2024-12-06 09:53:40.962207] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.916 [2024-12-06 09:53:40.962211] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22d7750) 00:16:15.916 [2024-12-06 09:53:40.962218] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.916 [2024-12-06 09:53:40.962239] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x233bbc0, cid 3, qid 0 00:16:15.916 [2024-12-06 09:53:40.962285] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.916 [2024-12-06 09:53:40.962291] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.916 [2024-12-06 09:53:40.962295] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.916 [2024-12-06 09:53:40.962314] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x233bbc0) on tqpair=0x22d7750 00:16:15.916 [2024-12-06 09:53:40.962321] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.916 [2024-12-06 09:53:40.962325] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.916 [2024-12-06 09:53:40.962329] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22d7750) 00:16:15.916 [2024-12-06 09:53:40.962335] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.916 [2024-12-06 09:53:40.962355] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x233bbc0, cid 3, qid 0 00:16:15.916 [2024-12-06 09:53:40.962421] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.916 [2024-12-06 09:53:40.962427] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.916 [2024-12-06 09:53:40.962431] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.916 [2024-12-06 09:53:40.962434] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x233bbc0) on tqpair=0x22d7750 00:16:15.916 [2024-12-06 09:53:40.962444] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:16:15.916 [2024-12-06 09:53:40.962449] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:16:15.916 [2024-12-06 09:53:40.962459] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.916 [2024-12-06 09:53:40.962463] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.916 [2024-12-06 09:53:40.962466] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22d7750) 00:16:15.916 [2024-12-06 09:53:40.962473] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.916 [2024-12-06 09:53:40.962490] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x233bbc0, cid 3, qid 0 00:16:15.916 [2024-12-06 09:53:40.962531] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.916 [2024-12-06 09:53:40.962537] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.916 [2024-12-06 09:53:40.962541] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.916 [2024-12-06 09:53:40.962545] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x233bbc0) on tqpair=0x22d7750 00:16:15.916 [2024-12-06 09:53:40.962555] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.916 [2024-12-06 09:53:40.962559] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.916 [2024-12-06 09:53:40.962562] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22d7750) 00:16:15.916 [2024-12-06 09:53:40.962569] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.916 [2024-12-06 09:53:40.962585] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x233bbc0, cid 3, qid 0 00:16:15.916 [2024-12-06 
09:53:40.962645] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.916 [2024-12-06 09:53:40.962653] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.916 [2024-12-06 09:53:40.962656] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.916 [2024-12-06 09:53:40.962660] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x233bbc0) on tqpair=0x22d7750 00:16:15.916 [2024-12-06 09:53:40.962670] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.916 [2024-12-06 09:53:40.962675] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.916 [2024-12-06 09:53:40.962678] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22d7750) 00:16:15.916 [2024-12-06 09:53:40.962685] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.916 [2024-12-06 09:53:40.962704] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x233bbc0, cid 3, qid 0 00:16:15.916 [2024-12-06 09:53:40.962748] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.916 [2024-12-06 09:53:40.962755] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.916 [2024-12-06 09:53:40.962758] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.916 [2024-12-06 09:53:40.962762] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x233bbc0) on tqpair=0x22d7750 00:16:15.916 [2024-12-06 09:53:40.962771] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.916 [2024-12-06 09:53:40.962775] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.916 [2024-12-06 09:53:40.962779] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22d7750) 00:16:15.916 [2024-12-06 09:53:40.962785] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.916 [2024-12-06 09:53:40.962802] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x233bbc0, cid 3, qid 0 00:16:15.916 [2024-12-06 09:53:40.962847] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.916 [2024-12-06 09:53:40.962853] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.916 [2024-12-06 09:53:40.962856] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.916 [2024-12-06 09:53:40.962860] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x233bbc0) on tqpair=0x22d7750 00:16:15.916 [2024-12-06 09:53:40.962870] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.916 [2024-12-06 09:53:40.962874] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.916 [2024-12-06 09:53:40.962877] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22d7750) 00:16:15.916 [2024-12-06 09:53:40.962884] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.916 [2024-12-06 09:53:40.962900] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x233bbc0, cid 3, qid 0 00:16:15.916 [2024-12-06 09:53:40.962945] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.916 [2024-12-06 09:53:40.962951] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.916 
[2024-12-06 09:53:40.962955] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.916 [2024-12-06 09:53:40.962958] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x233bbc0) on tqpair=0x22d7750 00:16:15.916 [2024-12-06 09:53:40.962968] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.916 [2024-12-06 09:53:40.962972] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.916 [2024-12-06 09:53:40.962975] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22d7750) 00:16:15.916 [2024-12-06 09:53:40.962982] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.916 [2024-12-06 09:53:40.962998] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x233bbc0, cid 3, qid 0 00:16:15.916 [2024-12-06 09:53:40.963066] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.916 [2024-12-06 09:53:40.963072] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.916 [2024-12-06 09:53:40.963075] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.916 [2024-12-06 09:53:40.963079] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x233bbc0) on tqpair=0x22d7750 00:16:15.916 [2024-12-06 09:53:40.963089] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.916 [2024-12-06 09:53:40.963141] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.916 [2024-12-06 09:53:40.963144] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22d7750) 00:16:15.916 [2024-12-06 09:53:40.963151] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.916 [2024-12-06 09:53:40.963169] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x233bbc0, cid 3, qid 0 00:16:15.916 [2024-12-06 09:53:40.963222] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.916 [2024-12-06 09:53:40.963228] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.916 [2024-12-06 09:53:40.963233] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.916 [2024-12-06 09:53:40.963236] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x233bbc0) on tqpair=0x22d7750 00:16:15.916 [2024-12-06 09:53:40.963246] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.916 [2024-12-06 09:53:40.963251] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.916 [2024-12-06 09:53:40.963255] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22d7750) 00:16:15.916 [2024-12-06 09:53:40.963261] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.916 [2024-12-06 09:53:40.963278] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x233bbc0, cid 3, qid 0 00:16:15.916 [2024-12-06 09:53:40.963336] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.916 [2024-12-06 09:53:40.963342] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.916 [2024-12-06 09:53:40.963346] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.916 [2024-12-06 09:53:40.963349] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x233bbc0) on tqpair=0x22d7750 00:16:15.916 [2024-12-06 09:53:40.963359] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.916 [2024-12-06 09:53:40.963364] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.916 [2024-12-06 09:53:40.963367] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22d7750) 00:16:15.916 [2024-12-06 09:53:40.963374] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.917 [2024-12-06 09:53:40.963390] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x233bbc0, cid 3, qid 0 00:16:15.917 [2024-12-06 09:53:40.963445] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.917 [2024-12-06 09:53:40.963451] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.917 [2024-12-06 09:53:40.963455] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.917 [2024-12-06 09:53:40.963458] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x233bbc0) on tqpair=0x22d7750 00:16:15.917 [2024-12-06 09:53:40.963468] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.917 [2024-12-06 09:53:40.963472] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.917 [2024-12-06 09:53:40.963476] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22d7750) 00:16:15.917 [2024-12-06 09:53:40.963482] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.917 [2024-12-06 09:53:40.963498] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x233bbc0, cid 3, qid 0 00:16:15.917 [2024-12-06 09:53:40.963563] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.917 [2024-12-06 09:53:40.963569] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.917 [2024-12-06 09:53:40.963573] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.917 [2024-12-06 09:53:40.963576] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x233bbc0) on tqpair=0x22d7750 00:16:15.917 [2024-12-06 09:53:40.966652] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.917 [2024-12-06 09:53:40.966668] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.917 [2024-12-06 09:53:40.966672] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22d7750) 00:16:15.917 [2024-12-06 09:53:40.966696] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.917 [2024-12-06 09:53:40.966720] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x233bbc0, cid 3, qid 0 00:16:15.917 [2024-12-06 09:53:40.966776] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.917 [2024-12-06 09:53:40.966783] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.917 [2024-12-06 09:53:40.966786] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.917 [2024-12-06 09:53:40.966790] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x233bbc0) on tqpair=0x22d7750 00:16:15.917 [2024-12-06 09:53:40.966799] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 4 
milliseconds 00:16:15.917 00:16:15.917 09:53:40 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:16:15.917 [2024-12-06 09:53:41.000124] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:16:15.917 [2024-12-06 09:53:41.000171] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74025 ] 00:16:15.917 [2024-12-06 09:53:41.155875] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:16:15.917 [2024-12-06 09:53:41.155966] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:16:15.917 [2024-12-06 09:53:41.155973] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:16:15.917 [2024-12-06 09:53:41.155989] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:16:15.917 [2024-12-06 09:53:41.156000] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:16:15.917 [2024-12-06 09:53:41.156390] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:16:15.917 [2024-12-06 09:53:41.156467] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x74a750 0 00:16:15.917 [2024-12-06 09:53:41.163679] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:16:15.917 [2024-12-06 09:53:41.163702] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:16:15.917 [2024-12-06 09:53:41.163724] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:16:15.917 [2024-12-06 09:53:41.163728] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:16:15.917 [2024-12-06 09:53:41.163762] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.917 [2024-12-06 09:53:41.163769] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.917 [2024-12-06 09:53:41.163774] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x74a750) 00:16:15.917 [2024-12-06 09:53:41.163789] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:16:15.917 [2024-12-06 09:53:41.163820] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7ae740, cid 0, qid 0 00:16:15.917 [2024-12-06 09:53:41.171655] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.917 [2024-12-06 09:53:41.171677] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.917 [2024-12-06 09:53:41.171698] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.917 [2024-12-06 09:53:41.171704] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7ae740) on tqpair=0x74a750 00:16:15.917 [2024-12-06 09:53:41.171716] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:16:15.917 [2024-12-06 09:53:41.171726] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:16:15.917 [2024-12-06 09:53:41.171733] 
nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:16:15.917 [2024-12-06 09:53:41.171754] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.917 [2024-12-06 09:53:41.171760] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.917 [2024-12-06 09:53:41.171764] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x74a750) 00:16:15.917 [2024-12-06 09:53:41.171776] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.917 [2024-12-06 09:53:41.171804] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7ae740, cid 0, qid 0 00:16:15.917 [2024-12-06 09:53:41.171936] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.917 [2024-12-06 09:53:41.171943] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.917 [2024-12-06 09:53:41.171961] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.917 [2024-12-06 09:53:41.171965] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7ae740) on tqpair=0x74a750 00:16:15.917 [2024-12-06 09:53:41.171971] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:16:15.917 [2024-12-06 09:53:41.171978] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:16:15.917 [2024-12-06 09:53:41.171985] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.917 [2024-12-06 09:53:41.171989] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.917 [2024-12-06 09:53:41.171993] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x74a750) 00:16:15.917 [2024-12-06 09:53:41.172000] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.917 [2024-12-06 09:53:41.172017] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7ae740, cid 0, qid 0 00:16:15.917 [2024-12-06 09:53:41.172072] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.917 [2024-12-06 09:53:41.172078] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.917 [2024-12-06 09:53:41.172082] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.917 [2024-12-06 09:53:41.172086] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7ae740) on tqpair=0x74a750 00:16:15.917 [2024-12-06 09:53:41.172091] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:16:15.917 [2024-12-06 09:53:41.172104] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:16:15.917 [2024-12-06 09:53:41.172111] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.917 [2024-12-06 09:53:41.172116] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.917 [2024-12-06 09:53:41.172119] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x74a750) 00:16:15.917 [2024-12-06 09:53:41.172126] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:16:15.917 [2024-12-06 09:53:41.172142] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7ae740, cid 0, qid 0 00:16:15.917 [2024-12-06 09:53:41.172197] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.917 [2024-12-06 09:53:41.172204] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.917 [2024-12-06 09:53:41.172207] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.917 [2024-12-06 09:53:41.172211] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7ae740) on tqpair=0x74a750 00:16:15.917 [2024-12-06 09:53:41.172216] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:15.917 [2024-12-06 09:53:41.172226] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.917 [2024-12-06 09:53:41.172230] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.918 [2024-12-06 09:53:41.172233] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x74a750) 00:16:15.918 [2024-12-06 09:53:41.172240] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.918 [2024-12-06 09:53:41.172256] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7ae740, cid 0, qid 0 00:16:15.918 [2024-12-06 09:53:41.172305] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.918 [2024-12-06 09:53:41.172312] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.918 [2024-12-06 09:53:41.172315] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.918 [2024-12-06 09:53:41.172319] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7ae740) on tqpair=0x74a750 00:16:15.918 [2024-12-06 09:53:41.172324] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:16:15.918 [2024-12-06 09:53:41.172329] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:16:15.918 [2024-12-06 09:53:41.172337] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:15.918 [2024-12-06 09:53:41.172448] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:16:15.918 [2024-12-06 09:53:41.172453] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:15.918 [2024-12-06 09:53:41.172462] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.918 [2024-12-06 09:53:41.172466] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.918 [2024-12-06 09:53:41.172469] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x74a750) 00:16:15.918 [2024-12-06 09:53:41.172476] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.918 [2024-12-06 09:53:41.172494] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7ae740, cid 0, qid 0 00:16:15.918 [2024-12-06 09:53:41.172543] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
00:16:15.918 [2024-12-06 09:53:41.172549] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.918 [2024-12-06 09:53:41.172553] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.918 [2024-12-06 09:53:41.172556] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7ae740) on tqpair=0x74a750 00:16:15.918 [2024-12-06 09:53:41.172561] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:15.918 [2024-12-06 09:53:41.172571] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.918 [2024-12-06 09:53:41.172590] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.918 [2024-12-06 09:53:41.172594] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x74a750) 00:16:15.918 [2024-12-06 09:53:41.172616] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.918 [2024-12-06 09:53:41.172664] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7ae740, cid 0, qid 0 00:16:15.918 [2024-12-06 09:53:41.172748] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.918 [2024-12-06 09:53:41.172757] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.918 [2024-12-06 09:53:41.172761] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.918 [2024-12-06 09:53:41.172765] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7ae740) on tqpair=0x74a750 00:16:15.918 [2024-12-06 09:53:41.172770] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:15.918 [2024-12-06 09:53:41.172775] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:16:15.918 [2024-12-06 09:53:41.172784] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:16:15.918 [2024-12-06 09:53:41.172795] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:16:15.918 [2024-12-06 09:53:41.172807] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.918 [2024-12-06 09:53:41.172811] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x74a750) 00:16:15.918 [2024-12-06 09:53:41.172843] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.918 [2024-12-06 09:53:41.172866] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7ae740, cid 0, qid 0 00:16:15.918 [2024-12-06 09:53:41.173030] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:15.918 [2024-12-06 09:53:41.173038] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:15.918 [2024-12-06 09:53:41.173042] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:15.918 [2024-12-06 09:53:41.173045] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x74a750): datao=0, datal=4096, cccid=0 00:16:15.918 [2024-12-06 09:53:41.173050] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0x7ae740) on tqpair(0x74a750): expected_datao=0, payload_size=4096 00:16:15.918 [2024-12-06 09:53:41.173055] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.918 [2024-12-06 09:53:41.173064] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:15.918 [2024-12-06 09:53:41.173069] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:15.918 [2024-12-06 09:53:41.173077] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.918 [2024-12-06 09:53:41.173082] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.918 [2024-12-06 09:53:41.173086] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.918 [2024-12-06 09:53:41.173090] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7ae740) on tqpair=0x74a750 00:16:15.918 [2024-12-06 09:53:41.173099] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:16:15.918 [2024-12-06 09:53:41.173104] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:16:15.918 [2024-12-06 09:53:41.173108] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:16:15.918 [2024-12-06 09:53:41.173121] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:16:15.918 [2024-12-06 09:53:41.173125] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:16:15.918 [2024-12-06 09:53:41.173130] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:16:15.918 [2024-12-06 09:53:41.173139] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:16:15.918 [2024-12-06 09:53:41.173146] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.918 [2024-12-06 09:53:41.173151] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.918 [2024-12-06 09:53:41.173154] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x74a750) 00:16:15.918 [2024-12-06 09:53:41.173162] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:15.918 [2024-12-06 09:53:41.173180] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7ae740, cid 0, qid 0 00:16:15.918 [2024-12-06 09:53:41.173235] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.918 [2024-12-06 09:53:41.173241] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.918 [2024-12-06 09:53:41.173245] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.918 [2024-12-06 09:53:41.173249] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7ae740) on tqpair=0x74a750 00:16:15.918 [2024-12-06 09:53:41.173261] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.918 [2024-12-06 09:53:41.173266] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.918 [2024-12-06 09:53:41.173270] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x74a750) 00:16:15.918 [2024-12-06 09:53:41.173276] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC 
EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:15.918 [2024-12-06 09:53:41.173283] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.918 [2024-12-06 09:53:41.173287] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.918 [2024-12-06 09:53:41.173290] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x74a750) 00:16:15.918 [2024-12-06 09:53:41.173296] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:15.918 [2024-12-06 09:53:41.173302] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.918 [2024-12-06 09:53:41.173306] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.918 [2024-12-06 09:53:41.173309] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x74a750) 00:16:15.918 [2024-12-06 09:53:41.173315] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:15.918 [2024-12-06 09:53:41.173321] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.918 [2024-12-06 09:53:41.173325] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.918 [2024-12-06 09:53:41.173328] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x74a750) 00:16:15.918 [2024-12-06 09:53:41.173334] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:15.918 [2024-12-06 09:53:41.173339] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:16:15.918 [2024-12-06 09:53:41.173347] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:15.918 [2024-12-06 09:53:41.173355] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.918 [2024-12-06 09:53:41.173359] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x74a750) 00:16:15.918 [2024-12-06 09:53:41.173365] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.918 [2024-12-06 09:53:41.173384] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7ae740, cid 0, qid 0 00:16:15.918 [2024-12-06 09:53:41.173391] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7ae8c0, cid 1, qid 0 00:16:15.918 [2024-12-06 09:53:41.173411] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7aea40, cid 2, qid 0 00:16:15.918 [2024-12-06 09:53:41.173416] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7aebc0, cid 3, qid 0 00:16:15.918 [2024-12-06 09:53:41.173421] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7aed40, cid 4, qid 0 00:16:15.918 [2024-12-06 09:53:41.173544] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.918 [2024-12-06 09:53:41.173556] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.918 [2024-12-06 09:53:41.173560] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.918 [2024-12-06 09:53:41.173565] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7aed40) on 
tqpair=0x74a750 00:16:15.918 [2024-12-06 09:53:41.173608] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:16:15.918 [2024-12-06 09:53:41.173619] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:16:15.919 [2024-12-06 09:53:41.173630] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:16:15.919 [2024-12-06 09:53:41.173646] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:16:15.919 [2024-12-06 09:53:41.173654] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.919 [2024-12-06 09:53:41.173658] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.919 [2024-12-06 09:53:41.173662] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x74a750) 00:16:15.919 [2024-12-06 09:53:41.173670] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:15.919 [2024-12-06 09:53:41.173691] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7aed40, cid 4, qid 0 00:16:15.919 [2024-12-06 09:53:41.173762] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.919 [2024-12-06 09:53:41.173769] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.919 [2024-12-06 09:53:41.173773] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.919 [2024-12-06 09:53:41.173777] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7aed40) on tqpair=0x74a750 00:16:15.919 [2024-12-06 09:53:41.173845] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:16:15.919 [2024-12-06 09:53:41.173857] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:16:15.919 [2024-12-06 09:53:41.173867] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.919 [2024-12-06 09:53:41.173871] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x74a750) 00:16:15.919 [2024-12-06 09:53:41.173878] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.919 [2024-12-06 09:53:41.173898] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7aed40, cid 4, qid 0 00:16:15.919 [2024-12-06 09:53:41.173982] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:15.919 [2024-12-06 09:53:41.173988] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:15.919 [2024-12-06 09:53:41.173992] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:15.919 [2024-12-06 09:53:41.173996] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x74a750): datao=0, datal=4096, cccid=4 00:16:15.919 [2024-12-06 09:53:41.174001] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x7aed40) on tqpair(0x74a750): expected_datao=0, payload_size=4096 00:16:15.919 [2024-12-06 09:53:41.174020] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.919 [2024-12-06 09:53:41.174028] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:15.919 [2024-12-06 09:53:41.174032] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:15.919 [2024-12-06 09:53:41.174040] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.919 [2024-12-06 09:53:41.174046] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.919 [2024-12-06 09:53:41.174049] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.919 [2024-12-06 09:53:41.174053] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7aed40) on tqpair=0x74a750 00:16:15.919 [2024-12-06 09:53:41.174071] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:16:15.919 [2024-12-06 09:53:41.174102] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:16:15.919 [2024-12-06 09:53:41.174129] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:16:15.919 [2024-12-06 09:53:41.174138] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.919 [2024-12-06 09:53:41.174142] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x74a750) 00:16:15.919 [2024-12-06 09:53:41.174150] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.919 [2024-12-06 09:53:41.174171] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7aed40, cid 4, qid 0 00:16:15.919 [2024-12-06 09:53:41.174256] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:15.919 [2024-12-06 09:53:41.174263] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:15.919 [2024-12-06 09:53:41.174267] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:15.919 [2024-12-06 09:53:41.174271] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x74a750): datao=0, datal=4096, cccid=4 00:16:15.919 [2024-12-06 09:53:41.174276] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x7aed40) on tqpair(0x74a750): expected_datao=0, payload_size=4096 00:16:15.919 [2024-12-06 09:53:41.174281] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.919 [2024-12-06 09:53:41.174288] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:15.919 [2024-12-06 09:53:41.174292] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:15.919 [2024-12-06 09:53:41.174300] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.919 [2024-12-06 09:53:41.174307] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.919 [2024-12-06 09:53:41.174310] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.919 [2024-12-06 09:53:41.174315] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7aed40) on tqpair=0x74a750 00:16:15.919 [2024-12-06 09:53:41.174332] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:16:15.919 [2024-12-06 09:53:41.174343] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to 
wait for identify namespace id descriptors (timeout 30000 ms) 00:16:15.919 [2024-12-06 09:53:41.174352] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.919 [2024-12-06 09:53:41.174357] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x74a750) 00:16:15.919 [2024-12-06 09:53:41.174364] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.919 [2024-12-06 09:53:41.174385] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7aed40, cid 4, qid 0 00:16:15.919 [2024-12-06 09:53:41.174476] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:15.919 [2024-12-06 09:53:41.174483] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:15.919 [2024-12-06 09:53:41.174487] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:15.919 [2024-12-06 09:53:41.174490] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x74a750): datao=0, datal=4096, cccid=4 00:16:15.919 [2024-12-06 09:53:41.174495] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x7aed40) on tqpair(0x74a750): expected_datao=0, payload_size=4096 00:16:15.919 [2024-12-06 09:53:41.174499] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.919 [2024-12-06 09:53:41.174506] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:15.919 [2024-12-06 09:53:41.174510] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:15.919 [2024-12-06 09:53:41.174518] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.919 [2024-12-06 09:53:41.174524] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.919 [2024-12-06 09:53:41.174527] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.919 [2024-12-06 09:53:41.174531] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7aed40) on tqpair=0x74a750 00:16:15.919 [2024-12-06 09:53:41.174555] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:16:15.919 [2024-12-06 09:53:41.174564] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:16:15.919 [2024-12-06 09:53:41.174586] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:16:15.919 [2024-12-06 09:53:41.174593] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:16:15.919 [2024-12-06 09:53:41.174599] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:16:15.919 [2024-12-06 09:53:41.174605] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:16:15.919 [2024-12-06 09:53:41.174611] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:16:15.919 [2024-12-06 09:53:41.174616] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:16:15.919 [2024-12-06 
09:53:41.174633] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:16:15.919 [2024-12-06 09:53:41.174652] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.919 [2024-12-06 09:53:41.174657] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x74a750) 00:16:15.919 [2024-12-06 09:53:41.174664] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.919 [2024-12-06 09:53:41.174672] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.919 [2024-12-06 09:53:41.174676] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.919 [2024-12-06 09:53:41.174703] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x74a750) 00:16:15.919 [2024-12-06 09:53:41.174711] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:16:15.919 [2024-12-06 09:53:41.174739] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7aed40, cid 4, qid 0 00:16:15.919 [2024-12-06 09:53:41.174747] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7aeec0, cid 5, qid 0 00:16:15.919 [2024-12-06 09:53:41.174827] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.919 [2024-12-06 09:53:41.174834] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.919 [2024-12-06 09:53:41.174838] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.919 [2024-12-06 09:53:41.174842] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7aed40) on tqpair=0x74a750 00:16:15.919 [2024-12-06 09:53:41.174849] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.919 [2024-12-06 09:53:41.174855] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.919 [2024-12-06 09:53:41.174859] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.919 [2024-12-06 09:53:41.174863] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7aeec0) on tqpair=0x74a750 00:16:15.919 [2024-12-06 09:53:41.174874] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.919 [2024-12-06 09:53:41.174879] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x74a750) 00:16:15.919 [2024-12-06 09:53:41.174886] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.919 [2024-12-06 09:53:41.174904] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7aeec0, cid 5, qid 0 00:16:15.919 [2024-12-06 09:53:41.174975] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.919 [2024-12-06 09:53:41.174981] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.919 [2024-12-06 09:53:41.174996] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.920 [2024-12-06 09:53:41.175000] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7aeec0) on tqpair=0x74a750 00:16:15.920 [2024-12-06 09:53:41.175010] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.920 [2024-12-06 09:53:41.175015] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x74a750) 00:16:15.920 
[2024-12-06 09:53:41.175021] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.920 [2024-12-06 09:53:41.175037] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7aeec0, cid 5, qid 0 00:16:15.920 [2024-12-06 09:53:41.175146] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.920 [2024-12-06 09:53:41.175154] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.920 [2024-12-06 09:53:41.175158] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.920 [2024-12-06 09:53:41.175162] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7aeec0) on tqpair=0x74a750 00:16:15.920 [2024-12-06 09:53:41.175173] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.920 [2024-12-06 09:53:41.175177] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x74a750) 00:16:15.920 [2024-12-06 09:53:41.175185] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.920 [2024-12-06 09:53:41.175203] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7aeec0, cid 5, qid 0 00:16:15.920 [2024-12-06 09:53:41.175262] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.920 [2024-12-06 09:53:41.175269] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.920 [2024-12-06 09:53:41.175273] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.920 [2024-12-06 09:53:41.175277] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7aeec0) on tqpair=0x74a750 00:16:15.920 [2024-12-06 09:53:41.175297] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.920 [2024-12-06 09:53:41.175303] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x74a750) 00:16:15.920 [2024-12-06 09:53:41.175310] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.920 [2024-12-06 09:53:41.175318] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.920 [2024-12-06 09:53:41.175322] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x74a750) 00:16:15.920 [2024-12-06 09:53:41.175329] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.920 [2024-12-06 09:53:41.175337] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.920 [2024-12-06 09:53:41.175341] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x74a750) 00:16:15.920 [2024-12-06 09:53:41.175348] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.920 [2024-12-06 09:53:41.175360] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.920 [2024-12-06 09:53:41.175364] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x74a750) 00:16:15.920 [2024-12-06 09:53:41.175371] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 
nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.920 [2024-12-06 09:53:41.175391] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7aeec0, cid 5, qid 0 00:16:15.920 [2024-12-06 09:53:41.175398] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7aed40, cid 4, qid 0 00:16:15.920 [2024-12-06 09:53:41.175404] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7af040, cid 6, qid 0 00:16:15.920 [2024-12-06 09:53:41.175409] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7af1c0, cid 7, qid 0 00:16:16.182 [2024-12-06 09:53:41.179646] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:16.182 [2024-12-06 09:53:41.179668] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:16.182 [2024-12-06 09:53:41.179673] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:16.182 [2024-12-06 09:53:41.179677] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x74a750): datao=0, datal=8192, cccid=5 00:16:16.182 [2024-12-06 09:53:41.179683] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x7aeec0) on tqpair(0x74a750): expected_datao=0, payload_size=8192 00:16:16.182 [2024-12-06 09:53:41.179688] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:16.182 [2024-12-06 09:53:41.179711] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:16.182 [2024-12-06 09:53:41.179717] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:16.182 [2024-12-06 09:53:41.179723] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:16.182 [2024-12-06 09:53:41.179730] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:16.182 [2024-12-06 09:53:41.179733] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:16.182 [2024-12-06 09:53:41.179737] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x74a750): datao=0, datal=512, cccid=4 00:16:16.182 [2024-12-06 09:53:41.179742] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x7aed40) on tqpair(0x74a750): expected_datao=0, payload_size=512 00:16:16.182 [2024-12-06 09:53:41.179747] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:16.182 [2024-12-06 09:53:41.179753] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:16.182 [2024-12-06 09:53:41.179757] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:16.182 [2024-12-06 09:53:41.179763] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:16.182 [2024-12-06 09:53:41.179769] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:16.182 [2024-12-06 09:53:41.179773] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:16.182 [2024-12-06 09:53:41.179776] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x74a750): datao=0, datal=512, cccid=6 00:16:16.182 [2024-12-06 09:53:41.179781] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x7af040) on tqpair(0x74a750): expected_datao=0, payload_size=512 00:16:16.182 [2024-12-06 09:53:41.179785] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:16.182 [2024-12-06 09:53:41.179792] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:16.182 [2024-12-06 09:53:41.179796] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:16.182 [2024-12-06 09:53:41.179802] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:16.182 [2024-12-06 09:53:41.179808] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:16.182 [2024-12-06 09:53:41.179811] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:16.182 [2024-12-06 09:53:41.179815] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x74a750): datao=0, datal=4096, cccid=7 00:16:16.182 [2024-12-06 09:53:41.179819] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x7af1c0) on tqpair(0x74a750): expected_datao=0, payload_size=4096 00:16:16.182 [2024-12-06 09:53:41.179824] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:16.182 [2024-12-06 09:53:41.179831] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:16.182 [2024-12-06 09:53:41.179836] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:16.182 [2024-12-06 09:53:41.179842] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:16.182 [2024-12-06 09:53:41.179848] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:16.182 [2024-12-06 09:53:41.179851] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:16.182 [2024-12-06 09:53:41.179856] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7aeec0) on tqpair=0x74a750 00:16:16.182 [2024-12-06 09:53:41.179874] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:16.182 [2024-12-06 09:53:41.179881] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:16.182 [2024-12-06 09:53:41.179885] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:16.182 [2024-12-06 09:53:41.179889] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7aed40) on tqpair=0x74a750 00:16:16.182 [2024-12-06 09:53:41.179902] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:16.182 [2024-12-06 09:53:41.179908] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:16.182 [2024-12-06 09:53:41.179912] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:16.182 [2024-12-06 09:53:41.179916] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7af040) on tqpair=0x74a750 00:16:16.182 [2024-12-06 09:53:41.179924] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:16.182 [2024-12-06 09:53:41.179930] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:16.182 [2024-12-06 09:53:41.179933] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:16.182 [2024-12-06 09:53:41.179937] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7af1c0) on tqpair=0x74a750 00:16:16.182 ===================================================== 00:16:16.182 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:16:16.182 ===================================================== 00:16:16.182 Controller Capabilities/Features 00:16:16.182 ================================ 00:16:16.182 Vendor ID: 8086 00:16:16.182 Subsystem Vendor ID: 8086 00:16:16.182 Serial Number: SPDK00000000000001 00:16:16.182 Model Number: SPDK bdev Controller 00:16:16.182 Firmware Version: 25.01 00:16:16.182 Recommended Arb Burst: 6 00:16:16.182 IEEE OUI Identifier: e4 d2 5c 00:16:16.182 Multi-path I/O 00:16:16.182 May have multiple subsystem ports: Yes 00:16:16.182 May have multiple controllers: Yes 00:16:16.182 Associated with SR-IOV VF: No 00:16:16.182 Max Data Transfer 
Size: 131072 00:16:16.182 Max Number of Namespaces: 32 00:16:16.182 Max Number of I/O Queues: 127 00:16:16.182 NVMe Specification Version (VS): 1.3 00:16:16.182 NVMe Specification Version (Identify): 1.3 00:16:16.182 Maximum Queue Entries: 128 00:16:16.182 Contiguous Queues Required: Yes 00:16:16.182 Arbitration Mechanisms Supported 00:16:16.182 Weighted Round Robin: Not Supported 00:16:16.182 Vendor Specific: Not Supported 00:16:16.182 Reset Timeout: 15000 ms 00:16:16.182 Doorbell Stride: 4 bytes 00:16:16.182 NVM Subsystem Reset: Not Supported 00:16:16.182 Command Sets Supported 00:16:16.182 NVM Command Set: Supported 00:16:16.182 Boot Partition: Not Supported 00:16:16.182 Memory Page Size Minimum: 4096 bytes 00:16:16.182 Memory Page Size Maximum: 4096 bytes 00:16:16.182 Persistent Memory Region: Not Supported 00:16:16.182 Optional Asynchronous Events Supported 00:16:16.182 Namespace Attribute Notices: Supported 00:16:16.182 Firmware Activation Notices: Not Supported 00:16:16.182 ANA Change Notices: Not Supported 00:16:16.182 PLE Aggregate Log Change Notices: Not Supported 00:16:16.182 LBA Status Info Alert Notices: Not Supported 00:16:16.182 EGE Aggregate Log Change Notices: Not Supported 00:16:16.182 Normal NVM Subsystem Shutdown event: Not Supported 00:16:16.182 Zone Descriptor Change Notices: Not Supported 00:16:16.182 Discovery Log Change Notices: Not Supported 00:16:16.182 Controller Attributes 00:16:16.182 128-bit Host Identifier: Supported 00:16:16.182 Non-Operational Permissive Mode: Not Supported 00:16:16.182 NVM Sets: Not Supported 00:16:16.182 Read Recovery Levels: Not Supported 00:16:16.182 Endurance Groups: Not Supported 00:16:16.182 Predictable Latency Mode: Not Supported 00:16:16.182 Traffic Based Keep ALive: Not Supported 00:16:16.182 Namespace Granularity: Not Supported 00:16:16.182 SQ Associations: Not Supported 00:16:16.182 UUID List: Not Supported 00:16:16.182 Multi-Domain Subsystem: Not Supported 00:16:16.182 Fixed Capacity Management: Not Supported 00:16:16.182 Variable Capacity Management: Not Supported 00:16:16.182 Delete Endurance Group: Not Supported 00:16:16.182 Delete NVM Set: Not Supported 00:16:16.182 Extended LBA Formats Supported: Not Supported 00:16:16.182 Flexible Data Placement Supported: Not Supported 00:16:16.182 00:16:16.182 Controller Memory Buffer Support 00:16:16.182 ================================ 00:16:16.182 Supported: No 00:16:16.182 00:16:16.182 Persistent Memory Region Support 00:16:16.182 ================================ 00:16:16.182 Supported: No 00:16:16.182 00:16:16.182 Admin Command Set Attributes 00:16:16.182 ============================ 00:16:16.182 Security Send/Receive: Not Supported 00:16:16.182 Format NVM: Not Supported 00:16:16.182 Firmware Activate/Download: Not Supported 00:16:16.182 Namespace Management: Not Supported 00:16:16.182 Device Self-Test: Not Supported 00:16:16.182 Directives: Not Supported 00:16:16.182 NVMe-MI: Not Supported 00:16:16.182 Virtualization Management: Not Supported 00:16:16.182 Doorbell Buffer Config: Not Supported 00:16:16.182 Get LBA Status Capability: Not Supported 00:16:16.182 Command & Feature Lockdown Capability: Not Supported 00:16:16.182 Abort Command Limit: 4 00:16:16.182 Async Event Request Limit: 4 00:16:16.182 Number of Firmware Slots: N/A 00:16:16.182 Firmware Slot 1 Read-Only: N/A 00:16:16.182 Firmware Activation Without Reset: N/A 00:16:16.182 Multiple Update Detection Support: N/A 00:16:16.182 Firmware Update Granularity: No Information Provided 00:16:16.182 Per-Namespace SMART Log: No 
00:16:16.182 Asymmetric Namespace Access Log Page: Not Supported 00:16:16.182 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:16:16.182 Command Effects Log Page: Supported 00:16:16.182 Get Log Page Extended Data: Supported 00:16:16.182 Telemetry Log Pages: Not Supported 00:16:16.183 Persistent Event Log Pages: Not Supported 00:16:16.183 Supported Log Pages Log Page: May Support 00:16:16.183 Commands Supported & Effects Log Page: Not Supported 00:16:16.183 Feature Identifiers & Effects Log Page:May Support 00:16:16.183 NVMe-MI Commands & Effects Log Page: May Support 00:16:16.183 Data Area 4 for Telemetry Log: Not Supported 00:16:16.183 Error Log Page Entries Supported: 128 00:16:16.183 Keep Alive: Supported 00:16:16.183 Keep Alive Granularity: 10000 ms 00:16:16.183 00:16:16.183 NVM Command Set Attributes 00:16:16.183 ========================== 00:16:16.183 Submission Queue Entry Size 00:16:16.183 Max: 64 00:16:16.183 Min: 64 00:16:16.183 Completion Queue Entry Size 00:16:16.183 Max: 16 00:16:16.183 Min: 16 00:16:16.183 Number of Namespaces: 32 00:16:16.183 Compare Command: Supported 00:16:16.183 Write Uncorrectable Command: Not Supported 00:16:16.183 Dataset Management Command: Supported 00:16:16.183 Write Zeroes Command: Supported 00:16:16.183 Set Features Save Field: Not Supported 00:16:16.183 Reservations: Supported 00:16:16.183 Timestamp: Not Supported 00:16:16.183 Copy: Supported 00:16:16.183 Volatile Write Cache: Present 00:16:16.183 Atomic Write Unit (Normal): 1 00:16:16.183 Atomic Write Unit (PFail): 1 00:16:16.183 Atomic Compare & Write Unit: 1 00:16:16.183 Fused Compare & Write: Supported 00:16:16.183 Scatter-Gather List 00:16:16.183 SGL Command Set: Supported 00:16:16.183 SGL Keyed: Supported 00:16:16.183 SGL Bit Bucket Descriptor: Not Supported 00:16:16.183 SGL Metadata Pointer: Not Supported 00:16:16.183 Oversized SGL: Not Supported 00:16:16.183 SGL Metadata Address: Not Supported 00:16:16.183 SGL Offset: Supported 00:16:16.183 Transport SGL Data Block: Not Supported 00:16:16.183 Replay Protected Memory Block: Not Supported 00:16:16.183 00:16:16.183 Firmware Slot Information 00:16:16.183 ========================= 00:16:16.183 Active slot: 1 00:16:16.183 Slot 1 Firmware Revision: 25.01 00:16:16.183 00:16:16.183 00:16:16.183 Commands Supported and Effects 00:16:16.183 ============================== 00:16:16.183 Admin Commands 00:16:16.183 -------------- 00:16:16.183 Get Log Page (02h): Supported 00:16:16.183 Identify (06h): Supported 00:16:16.183 Abort (08h): Supported 00:16:16.183 Set Features (09h): Supported 00:16:16.183 Get Features (0Ah): Supported 00:16:16.183 Asynchronous Event Request (0Ch): Supported 00:16:16.183 Keep Alive (18h): Supported 00:16:16.183 I/O Commands 00:16:16.183 ------------ 00:16:16.183 Flush (00h): Supported LBA-Change 00:16:16.183 Write (01h): Supported LBA-Change 00:16:16.183 Read (02h): Supported 00:16:16.183 Compare (05h): Supported 00:16:16.183 Write Zeroes (08h): Supported LBA-Change 00:16:16.183 Dataset Management (09h): Supported LBA-Change 00:16:16.183 Copy (19h): Supported LBA-Change 00:16:16.183 00:16:16.183 Error Log 00:16:16.183 ========= 00:16:16.183 00:16:16.183 Arbitration 00:16:16.183 =========== 00:16:16.183 Arbitration Burst: 1 00:16:16.183 00:16:16.183 Power Management 00:16:16.183 ================ 00:16:16.183 Number of Power States: 1 00:16:16.183 Current Power State: Power State #0 00:16:16.183 Power State #0: 00:16:16.183 Max Power: 0.00 W 00:16:16.183 Non-Operational State: Operational 00:16:16.183 Entry Latency: Not Reported 
00:16:16.183 Exit Latency: Not Reported 00:16:16.183 Relative Read Throughput: 0 00:16:16.183 Relative Read Latency: 0 00:16:16.183 Relative Write Throughput: 0 00:16:16.183 Relative Write Latency: 0 00:16:16.183 Idle Power: Not Reported 00:16:16.183 Active Power: Not Reported 00:16:16.183 Non-Operational Permissive Mode: Not Supported 00:16:16.183 00:16:16.183 Health Information 00:16:16.183 ================== 00:16:16.183 Critical Warnings: 00:16:16.183 Available Spare Space: OK 00:16:16.183 Temperature: OK 00:16:16.183 Device Reliability: OK 00:16:16.183 Read Only: No 00:16:16.183 Volatile Memory Backup: OK 00:16:16.183 Current Temperature: 0 Kelvin (-273 Celsius) 00:16:16.183 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:16:16.183 Available Spare: 0% 00:16:16.183 Available Spare Threshold: 0% 00:16:16.183 Life Percentage Used:[2024-12-06 09:53:41.180053] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:16.183 [2024-12-06 09:53:41.180061] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x74a750) 00:16:16.183 [2024-12-06 09:53:41.180070] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.183 [2024-12-06 09:53:41.180097] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7af1c0, cid 7, qid 0 00:16:16.183 [2024-12-06 09:53:41.180159] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:16.183 [2024-12-06 09:53:41.180167] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:16.183 [2024-12-06 09:53:41.180170] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:16.183 [2024-12-06 09:53:41.180175] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7af1c0) on tqpair=0x74a750 00:16:16.183 [2024-12-06 09:53:41.180215] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:16:16.183 [2024-12-06 09:53:41.180227] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7ae740) on tqpair=0x74a750 00:16:16.183 [2024-12-06 09:53:41.180235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:16.183 [2024-12-06 09:53:41.180241] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7ae8c0) on tqpair=0x74a750 00:16:16.183 [2024-12-06 09:53:41.180246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:16.183 [2024-12-06 09:53:41.180251] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7aea40) on tqpair=0x74a750 00:16:16.183 [2024-12-06 09:53:41.180256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:16.183 [2024-12-06 09:53:41.180261] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7aebc0) on tqpair=0x74a750 00:16:16.183 [2024-12-06 09:53:41.180266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:16.183 [2024-12-06 09:53:41.180276] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:16.183 [2024-12-06 09:53:41.180280] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:16.183 [2024-12-06 09:53:41.180284] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=3 on tqpair(0x74a750) 00:16:16.183 [2024-12-06 09:53:41.180292] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.183 [2024-12-06 09:53:41.180315] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7aebc0, cid 3, qid 0 00:16:16.183 [2024-12-06 09:53:41.180370] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:16.183 [2024-12-06 09:53:41.180379] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:16.183 [2024-12-06 09:53:41.180383] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:16.183 [2024-12-06 09:53:41.180388] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7aebc0) on tqpair=0x74a750 00:16:16.183 [2024-12-06 09:53:41.180396] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:16.183 [2024-12-06 09:53:41.180401] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:16.183 [2024-12-06 09:53:41.180405] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x74a750) 00:16:16.183 [2024-12-06 09:53:41.180412] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.183 [2024-12-06 09:53:41.180459] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7aebc0, cid 3, qid 0 00:16:16.183 [2024-12-06 09:53:41.180591] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:16.183 [2024-12-06 09:53:41.180597] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:16.183 [2024-12-06 09:53:41.180616] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:16.183 [2024-12-06 09:53:41.180621] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7aebc0) on tqpair=0x74a750 00:16:16.183 [2024-12-06 09:53:41.180636] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:16:16.183 [2024-12-06 09:53:41.180641] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:16:16.183 [2024-12-06 09:53:41.180651] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:16.183 [2024-12-06 09:53:41.180670] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:16.183 [2024-12-06 09:53:41.180675] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x74a750) 00:16:16.183 [2024-12-06 09:53:41.180683] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.183 [2024-12-06 09:53:41.180702] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7aebc0, cid 3, qid 0 00:16:16.183 [2024-12-06 09:53:41.180762] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:16.183 [2024-12-06 09:53:41.180769] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:16.183 [2024-12-06 09:53:41.180773] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:16.183 [2024-12-06 09:53:41.180777] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7aebc0) on tqpair=0x74a750 00:16:16.183 [2024-12-06 09:53:41.180789] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:16.183 [2024-12-06 09:53:41.180794] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:16.183 
[2024-12-06 09:53:41.180798] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x74a750) 00:16:16.183 [2024-12-06 09:53:41.180805] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.183 [2024-12-06 09:53:41.180822] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7aebc0, cid 3, qid 0 00:16:16.183 [2024-12-06 09:53:41.180883] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:16.184 [2024-12-06 09:53:41.180890] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:16.184 [2024-12-06 09:53:41.180894] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:16.184 [2024-12-06 09:53:41.180898] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7aebc0) on tqpair=0x74a750 00:16:16.184 [2024-12-06 09:53:41.180909] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:16.184 [2024-12-06 09:53:41.180913] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:16.184 [2024-12-06 09:53:41.180917] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x74a750) 00:16:16.184 [2024-12-06 09:53:41.180925] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.184 [2024-12-06 09:53:41.180956] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7aebc0, cid 3, qid 0 00:16:16.184 [2024-12-06 09:53:41.181037] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:16.184 [2024-12-06 09:53:41.181044] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:16.184 [2024-12-06 09:53:41.181047] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:16.184 [2024-12-06 09:53:41.181051] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7aebc0) on tqpair=0x74a750 00:16:16.184 [2024-12-06 09:53:41.181061] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:16.184 [2024-12-06 09:53:41.181065] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:16.184 [2024-12-06 09:53:41.181069] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x74a750) 00:16:16.184 [2024-12-06 09:53:41.181075] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.184 [2024-12-06 09:53:41.181091] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7aebc0, cid 3, qid 0 00:16:16.184 [2024-12-06 09:53:41.181205] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:16.184 [2024-12-06 09:53:41.181212] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:16.184 [2024-12-06 09:53:41.181215] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:16.184 [2024-12-06 09:53:41.181219] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7aebc0) on tqpair=0x74a750 00:16:16.184 [2024-12-06 09:53:41.181229] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:16.184 [2024-12-06 09:53:41.181233] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:16.184 [2024-12-06 09:53:41.181237] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x74a750) 00:16:16.184 [2024-12-06 09:53:41.181244] nvme_qpair.c: 218:nvme_admin_qpair_print_command: 
*NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.184 [2024-12-06 09:53:41.181260] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7aebc0, cid 3, qid 0 00:16:16.184 [2024-12-06 09:53:41.181325] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:16.184 [2024-12-06 09:53:41.181332] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:16.184 [2024-12-06 09:53:41.181335] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:16.184 [2024-12-06 09:53:41.181339] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7aebc0) on tqpair=0x74a750 00:16:16.184 [2024-12-06 09:53:41.181349] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:16.184 [2024-12-06 09:53:41.181353] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:16.184 [2024-12-06 09:53:41.181357] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x74a750) 00:16:16.184 [2024-12-06 09:53:41.181363] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.184 [2024-12-06 09:53:41.181379] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7aebc0, cid 3, qid 0 00:16:16.184 [2024-12-06 09:53:41.181432] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:16.184 [2024-12-06 09:53:41.181438] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:16.184 [2024-12-06 09:53:41.181441] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:16.184 [2024-12-06 09:53:41.181445] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7aebc0) on tqpair=0x74a750 00:16:16.184 [2024-12-06 09:53:41.181455] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:16.184 [2024-12-06 09:53:41.181459] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:16.184 [2024-12-06 09:53:41.181463] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x74a750) 00:16:16.184 [2024-12-06 09:53:41.181470] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.184 [2024-12-06 09:53:41.181485] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7aebc0, cid 3, qid 0 00:16:16.184 [2024-12-06 09:53:41.181539] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:16.184 [2024-12-06 09:53:41.181545] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:16.184 [2024-12-06 09:53:41.181548] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:16.184 [2024-12-06 09:53:41.181552] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7aebc0) on tqpair=0x74a750 00:16:16.184 [2024-12-06 09:53:41.181562] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:16.184 [2024-12-06 09:53:41.181566] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:16.184 [2024-12-06 09:53:41.181569] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x74a750) 00:16:16.184 [2024-12-06 09:53:41.181593] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.184 [2024-12-06 09:53:41.181624] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7aebc0, cid 3, qid 0 
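Note: the DEBUG/NOTICE records above and below come from the identify example tearing down its connection to nqn.2016-06.io.spdk:cnode1 after dumping the controller data. The host rewrites the CC property over the admin queue pair (tqpair 0x74a750) to request a shutdown (nvme_ctrlr_shutdown_set_cc_done, shutdown timeout = 10000 ms), then keeps issuing Fabrics Property Get commands to poll the controller status until nvme_ctrlr_shutdown_poll_async reports completion further down. The few shell lines below are a post-processing sketch for a saved copy of this output; the log file name is an assumption, nothing the test itself produces.

#!/usr/bin/env bash
# Sketch: summarize the controller shutdown sequence from a saved identify log.
# The file name is assumed; redirect the test output into it yourself.
log=${1:-identify_debug.log}

# Every status poll during shutdown shows up as a "FABRIC PROPERTY GET" notice.
printf 'shutdown status polls: %d\n' "$(grep -c 'FABRIC PROPERTY GET' "$log")"

# nvme_ctrlr_shutdown_poll_async prints the total latency once shutdown finishes.
grep -Eo 'shutdown complete in [0-9]+ milliseconds' "$log"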
00:16:16.184 [2024-12-06 09:53:41.181692] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:16.184 [2024-12-06 09:53:41.181700] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:16.184 [2024-12-06 09:53:41.181704] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:16.184 [2024-12-06 09:53:41.181708] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7aebc0) on tqpair=0x74a750 00:16:16.184 [2024-12-06 09:53:41.181719] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:16.184 [2024-12-06 09:53:41.181724] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:16.184 [2024-12-06 09:53:41.181728] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x74a750) 00:16:16.184 [2024-12-06 09:53:41.181735] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.184 [2024-12-06 09:53:41.181754] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7aebc0, cid 3, qid 0 00:16:16.184 [2024-12-06 09:53:41.181804] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:16.184 [2024-12-06 09:53:41.181816] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:16.184 [2024-12-06 09:53:41.181820] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:16.184 [2024-12-06 09:53:41.181824] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7aebc0) on tqpair=0x74a750 00:16:16.184 [2024-12-06 09:53:41.181835] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:16.184 [2024-12-06 09:53:41.181840] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:16.184 [2024-12-06 09:53:41.181844] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x74a750) 00:16:16.184 [2024-12-06 09:53:41.181851] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.184 [2024-12-06 09:53:41.181869] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7aebc0, cid 3, qid 0 00:16:16.184 [2024-12-06 09:53:41.181951] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:16.184 [2024-12-06 09:53:41.181972] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:16.184 [2024-12-06 09:53:41.181976] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:16.184 [2024-12-06 09:53:41.181999] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7aebc0) on tqpair=0x74a750 00:16:16.184 [2024-12-06 09:53:41.182009] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:16.184 [2024-12-06 09:53:41.182013] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:16.184 [2024-12-06 09:53:41.182016] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x74a750) 00:16:16.184 [2024-12-06 09:53:41.182023] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.184 [2024-12-06 09:53:41.182038] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7aebc0, cid 3, qid 0 00:16:16.184 [2024-12-06 09:53:41.182089] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:16.184 [2024-12-06 09:53:41.182096] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:16:16.184 [2024-12-06 09:53:41.182099] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:16.184 [2024-12-06 09:53:41.182103] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7aebc0) on tqpair=0x74a750 00:16:16.184 [2024-12-06 09:53:41.182113] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:16.184 [2024-12-06 09:53:41.182117] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:16.184 [2024-12-06 09:53:41.182120] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x74a750) 00:16:16.184 [2024-12-06 09:53:41.182127] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.184 [2024-12-06 09:53:41.182142] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7aebc0, cid 3, qid 0 00:16:16.184 [2024-12-06 09:53:41.182188] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:16.184 [2024-12-06 09:53:41.182194] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:16.184 [2024-12-06 09:53:41.182198] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:16.184 [2024-12-06 09:53:41.182201] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7aebc0) on tqpair=0x74a750 00:16:16.184 [2024-12-06 09:53:41.182211] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:16.184 [2024-12-06 09:53:41.182215] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:16.184 [2024-12-06 09:53:41.182219] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x74a750) 00:16:16.184 [2024-12-06 09:53:41.182225] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.184 [2024-12-06 09:53:41.182241] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7aebc0, cid 3, qid 0 00:16:16.184 [2024-12-06 09:53:41.182297] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:16.184 [2024-12-06 09:53:41.182303] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:16.184 [2024-12-06 09:53:41.182307] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:16.184 [2024-12-06 09:53:41.182310] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7aebc0) on tqpair=0x74a750 00:16:16.184 [2024-12-06 09:53:41.182320] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:16.184 [2024-12-06 09:53:41.182324] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:16.184 [2024-12-06 09:53:41.182327] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x74a750) 00:16:16.184 [2024-12-06 09:53:41.182334] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.185 [2024-12-06 09:53:41.182349] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7aebc0, cid 3, qid 0 00:16:16.185 [2024-12-06 09:53:41.182403] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:16.185 [2024-12-06 09:53:41.182409] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:16.185 [2024-12-06 09:53:41.182412] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:16.185 [2024-12-06 09:53:41.182416] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x7aebc0) on tqpair=0x74a750 00:16:16.185 [2024-12-06 09:53:41.182426] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:16.185 [2024-12-06 09:53:41.182430] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:16.185 [2024-12-06 09:53:41.182433] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x74a750) 00:16:16.185 [2024-12-06 09:53:41.182440] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.185 [2024-12-06 09:53:41.182455] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7aebc0, cid 3, qid 0 00:16:16.185 [2024-12-06 09:53:41.182508] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:16.185 [2024-12-06 09:53:41.182515] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:16.185 [2024-12-06 09:53:41.182518] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:16.185 [2024-12-06 09:53:41.182522] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7aebc0) on tqpair=0x74a750 00:16:16.185 [2024-12-06 09:53:41.182531] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:16.185 [2024-12-06 09:53:41.182535] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:16.185 [2024-12-06 09:53:41.182539] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x74a750) 00:16:16.185 [2024-12-06 09:53:41.182546] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.185 [2024-12-06 09:53:41.182561] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7aebc0, cid 3, qid 0 00:16:16.185 [2024-12-06 09:53:41.182658] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:16.185 [2024-12-06 09:53:41.182667] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:16.185 [2024-12-06 09:53:41.182670] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:16.185 [2024-12-06 09:53:41.182674] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7aebc0) on tqpair=0x74a750 00:16:16.185 [2024-12-06 09:53:41.182686] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:16.185 [2024-12-06 09:53:41.182690] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:16.185 [2024-12-06 09:53:41.182694] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x74a750) 00:16:16.185 [2024-12-06 09:53:41.182702] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.185 [2024-12-06 09:53:41.182720] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7aebc0, cid 3, qid 0 00:16:16.185 [2024-12-06 09:53:41.182777] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:16.185 [2024-12-06 09:53:41.182784] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:16.185 [2024-12-06 09:53:41.182788] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:16.185 [2024-12-06 09:53:41.182792] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7aebc0) on tqpair=0x74a750 00:16:16.185 [2024-12-06 09:53:41.182803] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:16.185 [2024-12-06 09:53:41.182807] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:16.185 [2024-12-06 09:53:41.182811] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x74a750) 00:16:16.185 [2024-12-06 09:53:41.182818] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.185 [2024-12-06 09:53:41.182835] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7aebc0, cid 3, qid 0 00:16:16.185 [2024-12-06 09:53:41.182912] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:16.185 [2024-12-06 09:53:41.182927] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:16.185 [2024-12-06 09:53:41.182932] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:16.185 [2024-12-06 09:53:41.182936] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7aebc0) on tqpair=0x74a750 00:16:16.185 [2024-12-06 09:53:41.182956] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:16.185 [2024-12-06 09:53:41.182961] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:16.185 [2024-12-06 09:53:41.182965] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x74a750) 00:16:16.185 [2024-12-06 09:53:41.182972] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.185 [2024-12-06 09:53:41.183016] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7aebc0, cid 3, qid 0 00:16:16.185 [2024-12-06 09:53:41.183073] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:16.185 [2024-12-06 09:53:41.183085] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:16.185 [2024-12-06 09:53:41.183097] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:16.185 [2024-12-06 09:53:41.183118] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7aebc0) on tqpair=0x74a750 00:16:16.185 [2024-12-06 09:53:41.183130] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:16.185 [2024-12-06 09:53:41.183144] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:16.185 [2024-12-06 09:53:41.183148] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x74a750) 00:16:16.185 [2024-12-06 09:53:41.183156] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.185 [2024-12-06 09:53:41.183175] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7aebc0, cid 3, qid 0 00:16:16.185 [2024-12-06 09:53:41.183228] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:16.185 [2024-12-06 09:53:41.183239] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:16.185 [2024-12-06 09:53:41.183244] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:16.185 [2024-12-06 09:53:41.183248] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7aebc0) on tqpair=0x74a750 00:16:16.185 [2024-12-06 09:53:41.183259] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:16.185 [2024-12-06 09:53:41.183264] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:16.185 [2024-12-06 09:53:41.183268] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x74a750) 00:16:16.185 [2024-12-06 
09:53:41.183275] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.185 [2024-12-06 09:53:41.183293] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7aebc0, cid 3, qid 0 00:16:16.185 [2024-12-06 09:53:41.183352] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:16.185 [2024-12-06 09:53:41.183359] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:16.185 [2024-12-06 09:53:41.183362] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:16.185 [2024-12-06 09:53:41.183367] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7aebc0) on tqpair=0x74a750 00:16:16.185 [2024-12-06 09:53:41.183377] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:16.185 [2024-12-06 09:53:41.183382] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:16.185 [2024-12-06 09:53:41.183386] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x74a750) 00:16:16.185 [2024-12-06 09:53:41.183393] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.185 [2024-12-06 09:53:41.183410] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7aebc0, cid 3, qid 0 00:16:16.185 [2024-12-06 09:53:41.183501] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:16.185 [2024-12-06 09:53:41.183514] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:16.185 [2024-12-06 09:53:41.183518] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:16.185 [2024-12-06 09:53:41.183537] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7aebc0) on tqpair=0x74a750 00:16:16.185 [2024-12-06 09:53:41.183548] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:16.185 [2024-12-06 09:53:41.183552] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:16.185 [2024-12-06 09:53:41.183556] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x74a750) 00:16:16.185 [2024-12-06 09:53:41.183562] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.185 [2024-12-06 09:53:41.187669] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7aebc0, cid 3, qid 0 00:16:16.185 [2024-12-06 09:53:41.187696] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:16.185 [2024-12-06 09:53:41.187703] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:16.185 [2024-12-06 09:53:41.187707] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:16.185 [2024-12-06 09:53:41.187711] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7aebc0) on tqpair=0x74a750 00:16:16.185 [2024-12-06 09:53:41.187725] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:16.185 [2024-12-06 09:53:41.187730] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:16.185 [2024-12-06 09:53:41.187733] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x74a750) 00:16:16.185 [2024-12-06 09:53:41.187741] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.185 [2024-12-06 09:53:41.187798] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7aebc0, cid 3, qid 0 00:16:16.185 [2024-12-06 09:53:41.187852] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:16.185 [2024-12-06 09:53:41.187859] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:16.185 [2024-12-06 09:53:41.187863] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:16.185 [2024-12-06 09:53:41.187867] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7aebc0) on tqpair=0x74a750 00:16:16.185 [2024-12-06 09:53:41.187876] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 7 milliseconds 00:16:16.185 0% 00:16:16.185 Data Units Read: 0 00:16:16.185 Data Units Written: 0 00:16:16.185 Host Read Commands: 0 00:16:16.185 Host Write Commands: 0 00:16:16.185 Controller Busy Time: 0 minutes 00:16:16.185 Power Cycles: 0 00:16:16.185 Power On Hours: 0 hours 00:16:16.185 Unsafe Shutdowns: 0 00:16:16.185 Unrecoverable Media Errors: 0 00:16:16.185 Lifetime Error Log Entries: 0 00:16:16.185 Warning Temperature Time: 0 minutes 00:16:16.185 Critical Temperature Time: 0 minutes 00:16:16.185 00:16:16.185 Number of Queues 00:16:16.185 ================ 00:16:16.185 Number of I/O Submission Queues: 127 00:16:16.185 Number of I/O Completion Queues: 127 00:16:16.185 00:16:16.185 Active Namespaces 00:16:16.185 ================= 00:16:16.185 Namespace ID:1 00:16:16.185 Error Recovery Timeout: Unlimited 00:16:16.185 Command Set Identifier: NVM (00h) 00:16:16.185 Deallocate: Supported 00:16:16.185 Deallocated/Unwritten Error: Not Supported 00:16:16.186 Deallocated Read Value: Unknown 00:16:16.186 Deallocate in Write Zeroes: Not Supported 00:16:16.186 Deallocated Guard Field: 0xFFFF 00:16:16.186 Flush: Supported 00:16:16.186 Reservation: Supported 00:16:16.186 Namespace Sharing Capabilities: Multiple Controllers 00:16:16.186 Size (in LBAs): 131072 (0GiB) 00:16:16.186 Capacity (in LBAs): 131072 (0GiB) 00:16:16.186 Utilization (in LBAs): 131072 (0GiB) 00:16:16.186 NGUID: ABCDEF0123456789ABCDEF0123456789 00:16:16.186 EUI64: ABCDEF0123456789 00:16:16.186 UUID: 91c35173-9e4c-46a5-aa03-74457bca8090 00:16:16.186 Thin Provisioning: Not Supported 00:16:16.186 Per-NS Atomic Units: Yes 00:16:16.186 Atomic Boundary Size (Normal): 0 00:16:16.186 Atomic Boundary Size (PFail): 0 00:16:16.186 Atomic Boundary Offset: 0 00:16:16.186 Maximum Single Source Range Length: 65535 00:16:16.186 Maximum Copy Length: 65535 00:16:16.186 Maximum Source Range Count: 1 00:16:16.186 NGUID/EUI64 Never Reused: No 00:16:16.186 Namespace Write Protected: No 00:16:16.186 Number of LBA Formats: 1 00:16:16.186 Current LBA Format: LBA Format #00 00:16:16.186 LBA Format #00: Data Size: 512 Metadata Size: 0 00:16:16.186 00:16:16.186 09:53:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:16:16.186 09:53:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:16.186 09:53:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.186 09:53:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:16.186 09:53:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.186 09:53:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:16:16.186 09:53:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # 
nvmftestfini 00:16:16.186 09:53:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:16.186 09:53:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:16:16.186 09:53:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:16.186 09:53:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:16:16.186 09:53:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:16.186 09:53:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:16.186 rmmod nvme_tcp 00:16:16.186 rmmod nvme_fabrics 00:16:16.186 rmmod nvme_keyring 00:16:16.186 09:53:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:16.186 09:53:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:16:16.186 09:53:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:16:16.186 09:53:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 73988 ']' 00:16:16.186 09:53:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 73988 00:16:16.186 09:53:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 73988 ']' 00:16:16.186 09:53:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 73988 00:16:16.186 09:53:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:16:16.186 09:53:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:16.186 09:53:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73988 00:16:16.186 09:53:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:16.186 09:53:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:16.186 killing process with pid 73988 00:16:16.186 09:53:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73988' 00:16:16.186 09:53:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 73988 00:16:16.186 09:53:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 73988 00:16:16.444 09:53:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:16.444 09:53:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:16.444 09:53:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:16.444 09:53:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:16:16.444 09:53:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:16:16.444 09:53:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:16.444 09:53:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:16:16.445 09:53:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:16.445 09:53:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:16.445 09:53:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:16.445 09:53:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:16.445 09:53:41 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:16.445 09:53:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:16.445 09:53:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:16.445 09:53:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:16.445 09:53:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:16.445 09:53:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:16.445 09:53:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:16.703 09:53:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:16.703 09:53:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:16.703 09:53:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:16.703 09:53:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:16.703 09:53:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:16.703 09:53:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:16.703 09:53:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:16.703 09:53:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:16.703 09:53:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@300 -- # return 0 00:16:16.703 00:16:16.703 real 0m2.956s 00:16:16.703 user 0m7.338s 00:16:16.703 sys 0m0.826s 00:16:16.703 ************************************ 00:16:16.703 END TEST nvmf_identify 00:16:16.703 ************************************ 00:16:16.703 09:53:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:16.703 09:53:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:16.703 09:53:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:16:16.703 09:53:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:16.703 09:53:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:16.703 09:53:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:16.703 ************************************ 00:16:16.703 START TEST nvmf_perf 00:16:16.703 ************************************ 00:16:16.703 09:53:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:16:16.963 * Looking for test storage... 
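Note: the timing summary and the END TEST / START TEST banners above mark the harness finishing nvmf_identify and moving on to nvmf_perf, which run_test launches as test/nvmf/host/perf.sh --transport=tcp. A minimal sketch of replaying just that step outside the CI wrapper is given here; the repository path is copied from this log, and root privileges are assumed because the script builds network namespaces and iptables rules.

# Sketch: rerun only the nvmf_perf host test against a local checkout.
# The path mirrors the workspace in this log; adjust it for your own tree.
cd /home/vagrant/spdk_repo/spdk
sudo ./test/nvmf/host/perf.sh --transport=tcp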
00:16:16.963 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:16.963 09:53:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:16.963 09:53:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:16.963 09:53:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lcov --version 00:16:16.963 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:16.963 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:16.963 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:16.963 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:16.963 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:16:16.963 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:16:16.963 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:16:16.963 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:16:16.963 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:16:16.963 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:16:16.963 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:16:16.963 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:16.963 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:16:16.963 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:16:16.963 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:16.963 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:16.963 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:16:16.963 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:16:16.963 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:16.963 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:16:16.963 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:16:16.963 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:16:16.963 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:16:16.963 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:16.963 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:16:16.963 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:16:16.963 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:16.963 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:16.963 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:16:16.963 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:16.963 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:16.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:16.963 --rc genhtml_branch_coverage=1 00:16:16.963 --rc genhtml_function_coverage=1 00:16:16.963 --rc genhtml_legend=1 00:16:16.963 --rc geninfo_all_blocks=1 00:16:16.963 --rc geninfo_unexecuted_blocks=1 00:16:16.963 00:16:16.963 ' 00:16:16.963 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:16.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:16.963 --rc genhtml_branch_coverage=1 00:16:16.963 --rc genhtml_function_coverage=1 00:16:16.963 --rc genhtml_legend=1 00:16:16.963 --rc geninfo_all_blocks=1 00:16:16.963 --rc geninfo_unexecuted_blocks=1 00:16:16.963 00:16:16.963 ' 00:16:16.963 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:16.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:16.963 --rc genhtml_branch_coverage=1 00:16:16.963 --rc genhtml_function_coverage=1 00:16:16.963 --rc genhtml_legend=1 00:16:16.963 --rc geninfo_all_blocks=1 00:16:16.963 --rc geninfo_unexecuted_blocks=1 00:16:16.963 00:16:16.963 ' 00:16:16.963 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:16.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:16.963 --rc genhtml_branch_coverage=1 00:16:16.963 --rc genhtml_function_coverage=1 00:16:16.963 --rc genhtml_legend=1 00:16:16.963 --rc geninfo_all_blocks=1 00:16:16.963 --rc geninfo_unexecuted_blocks=1 00:16:16.963 00:16:16.963 ' 00:16:16.963 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:16.963 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:16:16.963 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:16.963 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:16.963 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:16:16.963 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:16.963 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:16.963 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:16.963 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:16.963 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:16.963 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:16.963 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:16.963 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:16:16.963 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:16:16.963 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:16.963 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:16.963 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:16.963 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:16.963 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:16.963 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:16:16.963 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:16.963 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:16.963 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:16.963 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:16.963 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:16.963 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:16.963 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:16:16.963 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:16.963 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:16:16.963 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:16.963 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:16.963 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:16.963 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:16.963 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:16.963 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:16.963 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:16.963 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:16.963 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:16.963 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:16.963 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:16.963 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:16.963 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:16.963 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:16:16.963 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:16.964 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:16.964 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:16.964 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:16.964 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:16.964 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:16.964 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- 
# eval '_remove_spdk_ns 15> /dev/null' 00:16:16.964 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:16.964 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:16.964 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:16.964 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:16.964 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:16.964 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:16.964 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:16.964 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:16.964 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:16.964 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:16.964 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:16.964 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:16.964 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:16.964 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:16.964 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:16.964 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:16.964 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:16.964 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:16.964 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:16.964 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:16.964 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:16.964 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:16.964 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:16.964 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:16.964 Cannot find device "nvmf_init_br" 00:16:16.964 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true 00:16:16.964 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:16.964 Cannot find device "nvmf_init_br2" 00:16:16.964 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true 00:16:16.964 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:16.964 Cannot find device "nvmf_tgt_br" 00:16:16.964 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # true 00:16:16.964 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:16.964 Cannot find device "nvmf_tgt_br2" 00:16:16.964 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # true 00:16:16.964 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:16.964 Cannot find device "nvmf_init_br" 00:16:16.964 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # true 00:16:16.964 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:16.964 Cannot find device "nvmf_init_br2" 00:16:16.964 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # true 00:16:16.964 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:16.964 Cannot find device "nvmf_tgt_br" 00:16:16.964 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # true 00:16:16.964 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:16.964 Cannot find device "nvmf_tgt_br2" 00:16:16.964 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # true 00:16:16.964 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:16.964 Cannot find device "nvmf_br" 00:16:16.964 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # true 00:16:16.964 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:17.222 Cannot find device "nvmf_init_if" 00:16:17.222 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # true 00:16:17.222 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:17.222 Cannot find device "nvmf_init_if2" 00:16:17.222 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # true 00:16:17.222 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:17.222 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:17.222 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # true 00:16:17.222 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:17.222 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:17.222 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # true 00:16:17.223 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:17.223 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:17.223 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:17.223 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:17.223 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:17.223 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:17.223 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:17.223 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:17.223 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:17.223 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:17.223 09:53:42 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:17.223 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:17.223 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:17.223 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:17.223 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:17.223 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:17.223 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:17.223 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:17.223 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:17.223 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:17.223 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:17.223 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:17.223 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:17.223 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:17.223 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:17.223 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:17.223 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:17.223 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:17.223 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:17.223 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:17.223 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:17.223 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:17.481 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:17.481 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:17.481 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.087 ms 00:16:17.481 00:16:17.481 --- 10.0.0.3 ping statistics --- 00:16:17.481 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:17.481 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:16:17.481 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:17.481 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:16:17.481 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.072 ms 00:16:17.481 00:16:17.481 --- 10.0.0.4 ping statistics --- 00:16:17.481 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:17.481 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:16:17.481 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:17.481 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:17.481 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:16:17.481 00:16:17.481 --- 10.0.0.1 ping statistics --- 00:16:17.481 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:17.481 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:16:17.481 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:17.481 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:17.481 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:16:17.481 00:16:17.481 --- 10.0.0.2 ping statistics --- 00:16:17.481 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:17.481 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:16:17.481 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:17.481 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@461 -- # return 0 00:16:17.481 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:17.481 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:17.481 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:17.481 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:17.481 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:17.481 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:17.481 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:17.481 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:16:17.481 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:17.481 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:17.481 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:16:17.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:17.481 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=74254 00:16:17.481 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 74254 00:16:17.481 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 74254 ']' 00:16:17.481 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:17.481 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:17.481 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:17.481 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
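For readers following the trace, the nvmf_veth_init steps above (common.sh lines 145-225) reduce to a small veth/bridge topology with the target confined to its own network namespace. The following is a condensed sketch, not a verbatim excerpt of common.sh: it uses only commands, interface names, and addresses that appear in the trace, run as root, with the second init/tgt interface pair and error handling omitted.

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br          # initiator-side veth pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br            # target-side veth pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                     # move the target end into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$l" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link set nvmf_init_br master nvmf_br                            # bridge the two pairs together
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT  # admit NVMe/TCP traffic
    ping -c 1 10.0.0.3                                                  # connectivity check, as in the trace
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &   # target app inside the namespace

The point of the layout is that the initiator addresses (10.0.0.1/10.0.0.2) stay in the root namespace while the target addresses (10.0.0.3/10.0.0.4) live in nvmf_tgt_ns_spdk, so the perf tools and nvmf_tgt exchange traffic over a real bridged TCP path rather than loopback.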
00:16:17.481 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:17.481 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:16:17.481 [2024-12-06 09:53:42.609466] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:16:17.481 [2024-12-06 09:53:42.609791] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:17.740 [2024-12-06 09:53:42.763446] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:17.740 [2024-12-06 09:53:42.826959] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:17.740 [2024-12-06 09:53:42.827030] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:17.740 [2024-12-06 09:53:42.827046] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:17.740 [2024-12-06 09:53:42.827057] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:17.740 [2024-12-06 09:53:42.827066] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:17.740 [2024-12-06 09:53:42.828457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:17.740 [2024-12-06 09:53:42.828618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:17.740 [2024-12-06 09:53:42.828710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:17.740 [2024-12-06 09:53:42.828709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:17.740 [2024-12-06 09:53:42.885998] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:17.740 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:17.740 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:16:17.740 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:17.740 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:17.740 09:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:16:17.740 09:53:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:17.740 09:53:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:16:17.740 09:53:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:16:18.309 09:53:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:16:18.309 09:53:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:16:18.568 09:53:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:16:18.569 09:53:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:18.828 09:53:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:16:18.828 09:53:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- 
# '[' -n 0000:00:10.0 ']' 00:16:18.828 09:53:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:16:18.828 09:53:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:16:18.828 09:53:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:19.087 [2024-12-06 09:53:44.295212] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:19.087 09:53:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:19.655 09:53:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:16:19.655 09:53:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:19.655 09:53:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:16:19.655 09:53:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:16:19.914 09:53:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:20.173 [2024-12-06 09:53:45.345392] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:20.173 09:53:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:16:20.430 09:53:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:16:20.430 09:53:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:16:20.430 09:53:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:16:20.430 09:53:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:16:21.801 Initializing NVMe Controllers 00:16:21.801 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:16:21.801 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:16:21.801 Initialization complete. Launching workers. 00:16:21.801 ======================================================== 00:16:21.801 Latency(us) 00:16:21.801 Device Information : IOPS MiB/s Average min max 00:16:21.801 PCIE (0000:00:10.0) NSID 1 from core 0: 21148.52 82.61 1514.50 292.82 8518.44 00:16:21.801 ======================================================== 00:16:21.801 Total : 21148.52 82.61 1514.50 292.82 8518.44 00:16:21.801 00:16:21.801 09:53:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:16:23.176 Initializing NVMe Controllers 00:16:23.176 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:16:23.176 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:23.176 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:16:23.176 Initialization complete. Launching workers. 
00:16:23.176 ======================================================== 00:16:23.176 Latency(us) 00:16:23.176 Device Information : IOPS MiB/s Average min max 00:16:23.176 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3289.00 12.85 303.73 102.72 7233.41 00:16:23.176 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 123.00 0.48 8169.58 6041.53 14957.73 00:16:23.176 ======================================================== 00:16:23.176 Total : 3412.00 13.33 587.29 102.72 14957.73 00:16:23.176 00:16:23.176 09:53:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:16:24.554 Initializing NVMe Controllers 00:16:24.554 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:16:24.554 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:24.554 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:16:24.554 Initialization complete. Launching workers. 00:16:24.554 ======================================================== 00:16:24.554 Latency(us) 00:16:24.554 Device Information : IOPS MiB/s Average min max 00:16:24.554 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8730.30 34.10 3666.97 450.98 10377.05 00:16:24.554 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3418.16 13.35 9422.44 5041.01 18238.69 00:16:24.554 ======================================================== 00:16:24.554 Total : 12148.46 47.45 5286.37 450.98 18238.69 00:16:24.554 00:16:24.554 09:53:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:16:24.554 09:53:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:16:27.084 Initializing NVMe Controllers 00:16:27.084 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:16:27.084 Controller IO queue size 128, less than required. 00:16:27.084 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:27.084 Controller IO queue size 128, less than required. 00:16:27.084 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:27.084 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:27.084 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:16:27.084 Initialization complete. Launching workers. 
00:16:27.084 ======================================================== 00:16:27.084 Latency(us) 00:16:27.084 Device Information : IOPS MiB/s Average min max 00:16:27.084 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1539.73 384.93 84772.23 45454.89 214409.39 00:16:27.084 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 641.14 160.28 206698.93 31236.83 295530.89 00:16:27.084 ======================================================== 00:16:27.084 Total : 2180.86 545.22 120616.61 31236.83 295530.89 00:16:27.084 00:16:27.084 09:53:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0xf -P 4 00:16:27.343 Initializing NVMe Controllers 00:16:27.343 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:16:27.343 Controller IO queue size 128, less than required. 00:16:27.343 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:27.343 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:16:27.343 Controller IO queue size 128, less than required. 00:16:27.343 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:27.343 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:16:27.343 WARNING: Some requested NVMe devices were skipped 00:16:27.343 No valid NVMe controllers or AIO or URING devices found 00:16:27.343 09:53:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' --transport-stat 00:16:29.880 Initializing NVMe Controllers 00:16:29.880 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:16:29.880 Controller IO queue size 128, less than required. 00:16:29.880 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:29.880 Controller IO queue size 128, less than required. 00:16:29.880 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:29.880 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:29.880 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:16:29.880 Initialization complete. Launching workers. 
00:16:29.880 00:16:29.880 ==================== 00:16:29.880 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:16:29.880 TCP transport: 00:16:29.880 polls: 10177 00:16:29.880 idle_polls: 5767 00:16:29.880 sock_completions: 4410 00:16:29.880 nvme_completions: 5919 00:16:29.880 submitted_requests: 8816 00:16:29.880 queued_requests: 1 00:16:29.880 00:16:29.880 ==================== 00:16:29.880 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:16:29.880 TCP transport: 00:16:29.880 polls: 10569 00:16:29.880 idle_polls: 6778 00:16:29.880 sock_completions: 3791 00:16:29.880 nvme_completions: 6247 00:16:29.880 submitted_requests: 9340 00:16:29.880 queued_requests: 1 00:16:29.880 ======================================================== 00:16:29.880 Latency(us) 00:16:29.880 Device Information : IOPS MiB/s Average min max 00:16:29.880 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1477.73 369.43 88154.75 52349.00 146811.36 00:16:29.880 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1559.64 389.91 83093.22 45067.17 126829.47 00:16:29.880 ======================================================== 00:16:29.880 Total : 3037.37 759.34 85555.74 45067.17 146811.36 00:16:29.880 00:16:29.880 09:53:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:16:29.880 09:53:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:30.138 09:53:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:16:30.138 09:53:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:16:30.138 09:53:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:16:30.138 09:53:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:30.138 09:53:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:16:30.138 09:53:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:30.139 09:53:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:16:30.139 09:53:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:30.139 09:53:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:30.139 rmmod nvme_tcp 00:16:30.139 rmmod nvme_fabrics 00:16:30.139 rmmod nvme_keyring 00:16:30.139 09:53:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:30.139 09:53:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:16:30.139 09:53:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:16:30.139 09:53:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 74254 ']' 00:16:30.139 09:53:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 74254 00:16:30.139 09:53:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 74254 ']' 00:16:30.139 09:53:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 74254 00:16:30.139 09:53:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:16:30.139 09:53:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:30.139 09:53:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74254 00:16:30.139 killing process with pid 74254 00:16:30.139 09:53:55 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:30.139 09:53:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:30.139 09:53:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74254' 00:16:30.139 09:53:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 74254 00:16:30.139 09:53:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 74254 00:16:31.074 09:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:31.074 09:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:31.074 09:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:31.074 09:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:16:31.074 09:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:16:31.074 09:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:31.074 09:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:16:31.074 09:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:31.074 09:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:31.074 09:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:31.074 09:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:31.074 09:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:31.074 09:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:31.074 09:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:31.074 09:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:31.074 09:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:31.074 09:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:31.074 09:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:31.074 09:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:31.074 09:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:31.074 09:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:31.074 09:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:31.074 09:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:31.074 09:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:31.074 09:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:31.074 09:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:31.074 09:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@300 -- # return 0 00:16:31.074 00:16:31.074 real 0m14.367s 00:16:31.074 user 0m51.973s 00:16:31.074 sys 0m3.991s 00:16:31.074 09:53:56 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:31.074 09:53:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:16:31.074 ************************************ 00:16:31.074 END TEST nvmf_perf 00:16:31.074 ************************************ 00:16:31.074 09:53:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:16:31.074 09:53:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:31.074 09:53:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:31.074 09:53:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:31.074 ************************************ 00:16:31.074 START TEST nvmf_fio_host 00:16:31.074 ************************************ 00:16:31.074 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:16:31.335 * Looking for test storage... 00:16:31.335 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:31.335 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:31.335 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lcov --version 00:16:31.335 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:31.335 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:31.335 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:31.335 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:31.335 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:31.335 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:16:31.335 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:16:31.335 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:16:31.335 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:16:31.335 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:16:31.335 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:16:31.335 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:16:31.335 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:31.335 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:16:31.335 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:16:31.335 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:31.335 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:31.335 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:16:31.335 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:16:31.335 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:31.335 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:16:31.335 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:16:31.335 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:16:31.335 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:16:31.335 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:31.335 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:16:31.335 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:16:31.335 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:31.335 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:31.335 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:16:31.335 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:31.335 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:31.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:31.335 --rc genhtml_branch_coverage=1 00:16:31.335 --rc genhtml_function_coverage=1 00:16:31.335 --rc genhtml_legend=1 00:16:31.335 --rc geninfo_all_blocks=1 00:16:31.335 --rc geninfo_unexecuted_blocks=1 00:16:31.335 00:16:31.335 ' 00:16:31.335 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:31.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:31.335 --rc genhtml_branch_coverage=1 00:16:31.335 --rc genhtml_function_coverage=1 00:16:31.335 --rc genhtml_legend=1 00:16:31.335 --rc geninfo_all_blocks=1 00:16:31.335 --rc geninfo_unexecuted_blocks=1 00:16:31.335 00:16:31.335 ' 00:16:31.335 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:31.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:31.335 --rc genhtml_branch_coverage=1 00:16:31.335 --rc genhtml_function_coverage=1 00:16:31.335 --rc genhtml_legend=1 00:16:31.335 --rc geninfo_all_blocks=1 00:16:31.335 --rc geninfo_unexecuted_blocks=1 00:16:31.335 00:16:31.335 ' 00:16:31.335 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:31.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:31.335 --rc genhtml_branch_coverage=1 00:16:31.335 --rc genhtml_function_coverage=1 00:16:31.335 --rc genhtml_legend=1 00:16:31.335 --rc geninfo_all_blocks=1 00:16:31.335 --rc geninfo_unexecuted_blocks=1 00:16:31.335 00:16:31.335 ' 00:16:31.335 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:31.335 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:16:31.335 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:31.335 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:31.335 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:31.335 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.335 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.335 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.335 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:16:31.335 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.335 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:31.335 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:16:31.335 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:31.335 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:31.335 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:31.335 09:53:56 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:31.335 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:31.335 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:31.335 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:31.335 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:31.335 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:31.335 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:31.335 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:16:31.335 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:16:31.335 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:31.335 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:31.336 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:31.336 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:31.336 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:31.336 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:16:31.336 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:31.336 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:31.336 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:31.336 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.336 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.336 09:53:56 
nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.336 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:16:31.336 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.336 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:16:31.336 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:31.336 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:31.336 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:31.336 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:31.336 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:31.336 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:31.336 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:31.336 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:31.336 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:31.336 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:31.336 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:31.336 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:16:31.336 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:31.336 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:31.336 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:31.336 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:31.336 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:31.336 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
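The NVME_HOSTNQN, NVME_HOSTID, and NVME_CONNECT values captured in the trace above are how the suite parameterizes kernel-initiator access. This particular test drives I/O through the SPDK fio plugin rather than the kernel path, but tests that do use the kernel initiator consume these variables roughly as sketched below; the listener address and subsystem NQN are taken from this run, and the exact call sites in the scripts are not reproduced here.

    nvme discover -t tcp -a 10.0.0.3 -s 4420                             # list subsystems the target advertises
    nvme connect -t tcp -a 10.0.0.3 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"                # attach using the generated host identity
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1                        # tear down after the test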
00:16:31.336 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:31.336 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:31.336 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:31.336 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:31.336 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:31.336 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:31.336 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:31.336 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:31.336 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:31.336 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:31.336 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:31.336 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:31.336 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:31.336 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:31.336 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:31.336 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:31.336 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:31.336 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:31.336 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:31.336 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:31.336 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:31.336 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:31.336 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:31.336 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:31.336 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:31.336 Cannot find device "nvmf_init_br" 00:16:31.336 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:16:31.336 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:31.336 Cannot find device "nvmf_init_br2" 00:16:31.336 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:16:31.336 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:31.336 Cannot find device "nvmf_tgt_br" 00:16:31.336 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # true 00:16:31.336 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # ip link set 
nvmf_tgt_br2 nomaster 00:16:31.336 Cannot find device "nvmf_tgt_br2" 00:16:31.336 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # true 00:16:31.336 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:31.596 Cannot find device "nvmf_init_br" 00:16:31.596 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # true 00:16:31.596 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:31.596 Cannot find device "nvmf_init_br2" 00:16:31.596 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # true 00:16:31.596 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:31.596 Cannot find device "nvmf_tgt_br" 00:16:31.596 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # true 00:16:31.596 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:31.596 Cannot find device "nvmf_tgt_br2" 00:16:31.596 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # true 00:16:31.596 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:31.596 Cannot find device "nvmf_br" 00:16:31.596 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # true 00:16:31.596 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:31.596 Cannot find device "nvmf_init_if" 00:16:31.596 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # true 00:16:31.596 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:31.596 Cannot find device "nvmf_init_if2" 00:16:31.596 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # true 00:16:31.596 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:31.596 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:31.596 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # true 00:16:31.596 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:31.596 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:31.596 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # true 00:16:31.596 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:31.596 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:31.596 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:31.596 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:31.596 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:31.596 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:31.596 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:31.597 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev 
nvmf_init_if 00:16:31.597 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:31.597 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:31.597 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:31.597 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:31.597 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:31.597 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:31.597 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:31.597 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:31.597 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:31.597 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:31.597 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:31.597 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:31.597 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:31.597 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:31.597 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:31.597 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:31.597 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:31.597 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:31.597 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:31.597 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:31.597 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:31.597 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:31.597 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:31.597 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:31.597 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:31.597 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:16:31.597 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.079 ms 00:16:31.597 00:16:31.597 --- 10.0.0.3 ping statistics --- 00:16:31.597 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:31.597 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:16:31.597 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:31.856 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:31.856 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.062 ms 00:16:31.856 00:16:31.856 --- 10.0.0.4 ping statistics --- 00:16:31.856 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:31.856 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:16:31.856 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:31.856 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:31.856 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:16:31.856 00:16:31.856 --- 10.0.0.1 ping statistics --- 00:16:31.856 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:31.856 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:16:31.856 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:31.856 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:31.856 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:16:31.856 00:16:31.856 --- 10.0.0.2 ping statistics --- 00:16:31.856 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:31.856 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:16:31.856 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:31.856 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@461 -- # return 0 00:16:31.856 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:31.856 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:31.856 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:31.856 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:31.856 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:31.856 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:31.856 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:31.856 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:16:31.856 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:16:31.856 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:31.857 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:16:31.857 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=74710 00:16:31.857 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:31.857 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:31.857 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 74710 00:16:31.857 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@835 -- # '[' -z 74710 ']' 00:16:31.857 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:31.857 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:31.857 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:31.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:31.857 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:31.857 09:53:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:16:31.857 [2024-12-06 09:53:56.979860] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:16:31.857 [2024-12-06 09:53:56.980611] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:32.117 [2024-12-06 09:53:57.133427] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:32.117 [2024-12-06 09:53:57.191998] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:32.117 [2024-12-06 09:53:57.192061] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:32.117 [2024-12-06 09:53:57.192080] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:32.117 [2024-12-06 09:53:57.192091] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:32.117 [2024-12-06 09:53:57.192100] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
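The RPC sequence and fio invocation that follow build the target side (a 64 MiB malloc bdev exported as namespace 1 of nqn.2016-06.io.spdk:cnode1 on 10.0.0.3:4420) and then drive it through the SPDK fio plugin. A condensed sketch of that flow, with every command taken from this run (rpc.py is shorthand for /home/vagrant/spdk_repo/spdk/scripts/rpc.py; the job file is the stock example_config.fio shipped with SPDK and is not reproduced here):

  # target side: transport, backing bdev, subsystem, namespace, listener
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc1
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  # host side: fio with the SPDK NVMe ioengine preloaded, targeting the listener over TCP
  LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme \
      /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
      '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096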
00:16:32.117 [2024-12-06 09:53:57.193420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:32.117 [2024-12-06 09:53:57.193615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:32.117 [2024-12-06 09:53:57.193692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:32.117 [2024-12-06 09:53:57.193696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:32.117 [2024-12-06 09:53:57.253014] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:32.117 09:53:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:32.117 09:53:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:16:32.117 09:53:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:32.376 [2024-12-06 09:53:57.611723] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:32.376 09:53:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:16:32.376 09:53:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:32.376 09:53:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:16:32.635 09:53:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:32.894 Malloc1 00:16:32.894 09:53:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:33.152 09:53:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:33.411 09:53:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:33.669 [2024-12-06 09:53:58.711931] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:33.669 09:53:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:16:33.927 09:53:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:16:33.927 09:53:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:16:33.927 09:53:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:16:33.928 09:53:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:33.928 09:53:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:33.928 09:53:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:33.928 09:53:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:33.928 09:53:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:16:33.928 09:53:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:33.928 09:53:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:33.928 09:53:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:33.928 09:53:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:33.928 09:53:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:16:33.928 09:53:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:16:33.928 09:53:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:16:33.928 09:53:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:33.928 09:53:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:33.928 09:53:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:33.928 09:53:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:16:33.928 09:53:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:16:33.928 09:53:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:16:33.928 09:53:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:16:33.928 09:53:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:16:33.928 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:16:33.928 fio-3.35 00:16:33.928 Starting 1 thread 00:16:36.473 00:16:36.473 test: (groupid=0, jobs=1): err= 0: pid=74780: Fri Dec 6 09:54:01 2024 00:16:36.473 read: IOPS=8805, BW=34.4MiB/s (36.1MB/s)(69.0MiB/2007msec) 00:16:36.473 slat (nsec): min=1728, max=394200, avg=2312.81, stdev=3997.59 00:16:36.473 clat (usec): min=2748, max=13253, avg=7570.43, stdev=781.69 00:16:36.473 lat (usec): min=2812, max=13255, avg=7572.74, stdev=781.58 00:16:36.473 clat percentiles (usec): 00:16:36.473 | 1.00th=[ 6194], 5.00th=[ 6521], 10.00th=[ 6718], 20.00th=[ 6980], 00:16:36.473 | 30.00th=[ 7111], 40.00th=[ 7308], 50.00th=[ 7504], 60.00th=[ 7635], 00:16:36.473 | 70.00th=[ 7898], 80.00th=[ 8160], 90.00th=[ 8455], 95.00th=[ 8848], 00:16:36.473 | 99.00th=[10028], 99.50th=[10945], 99.90th=[12125], 99.95th=[12387], 00:16:36.473 | 99.99th=[13173] 00:16:36.473 bw ( KiB/s): min=33296, max=36664, per=100.00%, avg=35222.00, stdev=1481.90, samples=4 00:16:36.473 iops : min= 8324, max= 9166, avg=8805.50, stdev=370.47, samples=4 00:16:36.473 write: IOPS=8817, BW=34.4MiB/s (36.1MB/s)(69.1MiB/2007msec); 0 zone resets 00:16:36.473 slat (nsec): min=1812, max=266383, avg=2397.37, stdev=2652.04 00:16:36.473 clat (usec): min=2601, max=12843, avg=6904.19, stdev=719.61 00:16:36.473 lat (usec): min=2615, max=12846, avg=6906.59, stdev=719.58 00:16:36.473 
clat percentiles (usec): 00:16:36.473 | 1.00th=[ 5604], 5.00th=[ 5932], 10.00th=[ 6128], 20.00th=[ 6325], 00:16:36.473 | 30.00th=[ 6521], 40.00th=[ 6652], 50.00th=[ 6849], 60.00th=[ 6980], 00:16:36.473 | 70.00th=[ 7177], 80.00th=[ 7439], 90.00th=[ 7767], 95.00th=[ 8029], 00:16:36.473 | 99.00th=[ 9110], 99.50th=[10159], 99.90th=[11469], 99.95th=[12518], 00:16:36.473 | 99.99th=[12780] 00:16:36.473 bw ( KiB/s): min=34192, max=36224, per=99.98%, avg=35260.00, stdev=911.26, samples=4 00:16:36.473 iops : min= 8548, max= 9056, avg=8815.00, stdev=227.82, samples=4 00:16:36.473 lat (msec) : 4=0.07%, 10=99.14%, 20=0.79% 00:16:36.473 cpu : usr=69.09%, sys=23.53%, ctx=6, majf=0, minf=7 00:16:36.473 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:16:36.473 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:36.473 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:36.473 issued rwts: total=17672,17696,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:36.473 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:36.473 00:16:36.473 Run status group 0 (all jobs): 00:16:36.473 READ: bw=34.4MiB/s (36.1MB/s), 34.4MiB/s-34.4MiB/s (36.1MB/s-36.1MB/s), io=69.0MiB (72.4MB), run=2007-2007msec 00:16:36.473 WRITE: bw=34.4MiB/s (36.1MB/s), 34.4MiB/s-34.4MiB/s (36.1MB/s-36.1MB/s), io=69.1MiB (72.5MB), run=2007-2007msec 00:16:36.473 09:54:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:16:36.473 09:54:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:16:36.473 09:54:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:36.473 09:54:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:36.473 09:54:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:36.473 09:54:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:36.473 09:54:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:16:36.473 09:54:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:36.473 09:54:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:36.473 09:54:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:36.473 09:54:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:16:36.473 09:54:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:36.473 09:54:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:16:36.473 09:54:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:16:36.473 09:54:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:36.473 09:54:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:36.473 09:54:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:16:36.473 09:54:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:36.473 09:54:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:16:36.473 09:54:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:16:36.473 09:54:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:16:36.473 09:54:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:16:36.473 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:16:36.473 fio-3.35 00:16:36.473 Starting 1 thread 00:16:39.007 00:16:39.007 test: (groupid=0, jobs=1): err= 0: pid=74823: Fri Dec 6 09:54:03 2024 00:16:39.007 read: IOPS=7960, BW=124MiB/s (130MB/s)(250MiB/2009msec) 00:16:39.007 slat (usec): min=2, max=118, avg= 3.42, stdev= 2.30 00:16:39.007 clat (usec): min=2357, max=17108, avg=8913.87, stdev=2487.10 00:16:39.007 lat (usec): min=2360, max=17111, avg=8917.29, stdev=2487.05 00:16:39.007 clat percentiles (usec): 00:16:39.007 | 1.00th=[ 4293], 5.00th=[ 5080], 10.00th=[ 5669], 20.00th=[ 6783], 00:16:39.007 | 30.00th=[ 7570], 40.00th=[ 8160], 50.00th=[ 8848], 60.00th=[ 9372], 00:16:39.007 | 70.00th=[10028], 80.00th=[10683], 90.00th=[12256], 95.00th=[13698], 00:16:39.007 | 99.00th=[15533], 99.50th=[16057], 99.90th=[16712], 99.95th=[16909], 00:16:39.007 | 99.99th=[17171] 00:16:39.007 bw ( KiB/s): min=57888, max=73504, per=51.54%, avg=65648.00, stdev=8343.62, samples=4 00:16:39.007 iops : min= 3618, max= 4594, avg=4103.00, stdev=521.48, samples=4 00:16:39.007 write: IOPS=4639, BW=72.5MiB/s (76.0MB/s)(134MiB/1851msec); 0 zone resets 00:16:39.007 slat (usec): min=30, max=337, avg=34.65, stdev= 8.93 00:16:39.007 clat (usec): min=4157, max=21466, avg=12454.77, stdev=2403.07 00:16:39.007 lat (usec): min=4188, max=21509, avg=12489.42, stdev=2402.92 00:16:39.007 clat percentiles (usec): 00:16:39.007 | 1.00th=[ 8029], 5.00th=[ 9110], 10.00th=[ 9503], 20.00th=[10290], 00:16:39.007 | 30.00th=[10945], 40.00th=[11469], 50.00th=[12125], 60.00th=[12911], 00:16:39.007 | 70.00th=[13698], 80.00th=[14615], 90.00th=[15795], 95.00th=[16581], 00:16:39.007 | 99.00th=[18482], 99.50th=[19006], 99.90th=[20579], 99.95th=[21103], 00:16:39.007 | 99.99th=[21365] 00:16:39.007 bw ( KiB/s): min=59840, max=75776, per=91.86%, avg=68184.00, stdev=8735.60, samples=4 00:16:39.007 iops : min= 3740, max= 4736, avg=4261.50, stdev=545.98, samples=4 00:16:39.007 lat (msec) : 4=0.35%, 10=51.11%, 20=48.47%, 50=0.07% 00:16:39.007 cpu : usr=79.53%, sys=16.33%, ctx=3, majf=0, minf=8 00:16:39.007 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:16:39.007 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:39.007 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:39.007 issued rwts: total=15992,8587,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:39.007 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:39.007 00:16:39.007 Run status group 0 (all jobs): 00:16:39.007 
READ: bw=124MiB/s (130MB/s), 124MiB/s-124MiB/s (130MB/s-130MB/s), io=250MiB (262MB), run=2009-2009msec 00:16:39.007 WRITE: bw=72.5MiB/s (76.0MB/s), 72.5MiB/s-72.5MiB/s (76.0MB/s-76.0MB/s), io=134MiB (141MB), run=1851-1851msec 00:16:39.007 09:54:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:39.266 09:54:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:16:39.266 09:54:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:16:39.266 09:54:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:16:39.266 09:54:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:16:39.266 09:54:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:39.266 09:54:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:16:39.266 09:54:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:39.266 09:54:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:16:39.266 09:54:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:39.266 09:54:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:39.266 rmmod nvme_tcp 00:16:39.266 rmmod nvme_fabrics 00:16:39.266 rmmod nvme_keyring 00:16:39.266 09:54:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:39.266 09:54:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:16:39.266 09:54:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:16:39.266 09:54:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 74710 ']' 00:16:39.266 09:54:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 74710 00:16:39.266 09:54:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 74710 ']' 00:16:39.266 09:54:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 74710 00:16:39.266 09:54:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:16:39.266 09:54:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:39.266 09:54:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74710 00:16:39.266 killing process with pid 74710 00:16:39.266 09:54:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:39.266 09:54:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:39.266 09:54:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74710' 00:16:39.266 09:54:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 74710 00:16:39.266 09:54:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 74710 00:16:39.525 09:54:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:39.525 09:54:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:39.525 09:54:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:39.525 09:54:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:16:39.525 09:54:04 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:16:39.525 09:54:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:39.525 09:54:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:16:39.525 09:54:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:39.525 09:54:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:39.525 09:54:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:39.525 09:54:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:39.525 09:54:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:39.525 09:54:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:39.785 09:54:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:39.785 09:54:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:39.785 09:54:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:39.785 09:54:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:39.785 09:54:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:39.785 09:54:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:39.785 09:54:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:39.785 09:54:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:39.785 09:54:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:39.785 09:54:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:39.785 09:54:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:39.785 09:54:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:39.785 09:54:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:39.785 09:54:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@300 -- # return 0 00:16:39.785 ************************************ 00:16:39.785 END TEST nvmf_fio_host 00:16:39.785 ************************************ 00:16:39.785 00:16:39.785 real 0m8.654s 00:16:39.785 user 0m34.243s 00:16:39.785 sys 0m2.531s 00:16:39.785 09:54:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:39.785 09:54:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:16:39.785 09:54:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:16:39.785 09:54:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:39.785 09:54:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:39.785 09:54:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:39.785 ************************************ 00:16:39.785 START TEST nvmf_failover 
00:16:39.785 ************************************ 00:16:39.785 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:16:40.045 * Looking for test storage... 00:16:40.045 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:40.045 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:40.045 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lcov --version 00:16:40.045 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:40.045 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:40.045 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:40.045 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:40.045 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:40.045 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:16:40.045 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:16:40.045 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:16:40.045 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:16:40.045 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:16:40.045 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:16:40.045 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:16:40.045 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:40.045 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:16:40.045 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:16:40.045 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:40.045 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:40.045 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:16:40.045 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:16:40.045 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:40.045 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:16:40.045 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:16:40.045 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:16:40.045 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:16:40.045 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:40.045 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:16:40.045 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:16:40.045 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:40.045 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:40.045 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:16:40.045 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:40.045 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:40.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:40.045 --rc genhtml_branch_coverage=1 00:16:40.045 --rc genhtml_function_coverage=1 00:16:40.045 --rc genhtml_legend=1 00:16:40.045 --rc geninfo_all_blocks=1 00:16:40.045 --rc geninfo_unexecuted_blocks=1 00:16:40.045 00:16:40.045 ' 00:16:40.045 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:40.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:40.045 --rc genhtml_branch_coverage=1 00:16:40.045 --rc genhtml_function_coverage=1 00:16:40.045 --rc genhtml_legend=1 00:16:40.045 --rc geninfo_all_blocks=1 00:16:40.045 --rc geninfo_unexecuted_blocks=1 00:16:40.045 00:16:40.045 ' 00:16:40.045 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:40.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:40.045 --rc genhtml_branch_coverage=1 00:16:40.045 --rc genhtml_function_coverage=1 00:16:40.045 --rc genhtml_legend=1 00:16:40.045 --rc geninfo_all_blocks=1 00:16:40.045 --rc geninfo_unexecuted_blocks=1 00:16:40.045 00:16:40.045 ' 00:16:40.045 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:40.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:40.045 --rc genhtml_branch_coverage=1 00:16:40.045 --rc genhtml_function_coverage=1 00:16:40.045 --rc genhtml_legend=1 00:16:40.045 --rc geninfo_all_blocks=1 00:16:40.045 --rc geninfo_unexecuted_blocks=1 00:16:40.045 00:16:40.045 ' 00:16:40.045 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:40.045 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:16:40.045 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:40.045 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:16:40.045 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:40.045 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:40.045 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:40.045 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:40.045 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:40.045 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:40.045 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:40.045 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:40.045 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:16:40.045 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:16:40.045 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:40.045 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:40.045 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:40.045 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:40.045 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:40.045 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:16:40.045 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:40.045 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:40.045 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:40.045 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:40.045 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:40.045 
09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:40.045 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:16:40.045 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:40.045 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:16:40.045 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:40.045 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:40.045 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:40.045 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:40.045 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:40.045 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:40.045 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:40.045 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:40.045 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:40.045 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:40.045 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:40.045 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:40.045 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:40.045 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:40.045 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:16:40.045 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:40.045 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:40.045 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:40.045 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 
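The nvme gen-hostnqn call above seeded NVME_HOSTNQN/NVME_HOSTID, the host identity common.sh hands to 'nvme connect' via the NVME_HOST array. This run drives I/O through bdevperf rather than the kernel initiator, but for reference a kernel-side connect with that identity would look roughly like the following (illustrative only, not executed anywhere in this log):

  nvme connect -t tcp -a 10.0.0.3 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode1 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 \
      --hostid=8a753b29-bc84-4c8c-8ae2-d2e41bd915e7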
00:16:40.045 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:40.045 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:40.045 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:40.045 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:40.045 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:40.045 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:40.045 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:40.045 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:40.045 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:40.045 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:40.045 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:40.045 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:40.045 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:40.045 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:40.045 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:40.045 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:40.045 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:40.045 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:40.045 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:40.045 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:40.045 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:40.045 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:40.045 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:40.045 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:40.045 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:40.045 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:40.045 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:40.045 Cannot find device "nvmf_init_br" 00:16:40.045 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true 00:16:40.045 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:40.045 Cannot find device "nvmf_init_br2" 00:16:40.045 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true 00:16:40.045 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 
00:16:40.045 Cannot find device "nvmf_tgt_br" 00:16:40.046 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # true 00:16:40.046 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:40.046 Cannot find device "nvmf_tgt_br2" 00:16:40.046 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # true 00:16:40.046 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:40.046 Cannot find device "nvmf_init_br" 00:16:40.046 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # true 00:16:40.046 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:40.304 Cannot find device "nvmf_init_br2" 00:16:40.304 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # true 00:16:40.304 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:40.304 Cannot find device "nvmf_tgt_br" 00:16:40.304 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # true 00:16:40.304 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:40.304 Cannot find device "nvmf_tgt_br2" 00:16:40.304 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # true 00:16:40.304 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:40.304 Cannot find device "nvmf_br" 00:16:40.304 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # true 00:16:40.304 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:40.304 Cannot find device "nvmf_init_if" 00:16:40.304 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # true 00:16:40.304 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:40.304 Cannot find device "nvmf_init_if2" 00:16:40.304 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # true 00:16:40.304 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:40.304 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:40.304 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # true 00:16:40.304 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:40.304 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:40.304 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # true 00:16:40.304 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:40.304 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:40.304 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:40.304 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:40.304 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:40.304 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:40.304 
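The ip commands around this point rebuild the same four-interface topology the fio_host run used: two initiator veths left on the host (10.0.0.1 and 10.0.0.2) and two target veths moved into the nvmf_tgt_ns_spdk namespace (10.0.0.3 and 10.0.0.4), with all four peer ends enslaved to the nvmf_br bridge. Condensed sketch, commands taken from this run (the *_if2/*_br2 pair and the link-up/iptables steps repeat the same pattern):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator end stays on the host
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target end goes into the namespace
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br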
09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:40.304 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:40.304 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:40.304 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:40.304 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:40.304 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:40.304 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:40.304 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:40.304 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:40.304 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:40.304 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:40.304 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:40.304 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:40.304 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:40.304 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:40.304 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:40.304 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:40.305 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:40.305 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:40.305 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:40.305 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:40.305 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:40.563 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:40.563 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:40.563 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:40.563 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j 
ACCEPT' 00:16:40.563 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:40.563 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:40.563 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.077 ms 00:16:40.563 00:16:40.563 --- 10.0.0.3 ping statistics --- 00:16:40.563 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:40.563 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:16:40.563 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:40.563 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:40.563 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.051 ms 00:16:40.563 00:16:40.563 --- 10.0.0.4 ping statistics --- 00:16:40.563 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:40.563 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:16:40.563 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:40.563 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:40.563 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:16:40.563 00:16:40.563 --- 10.0.0.1 ping statistics --- 00:16:40.563 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:40.563 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:16:40.563 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:40.563 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:40.563 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:16:40.563 00:16:40.563 --- 10.0.0.2 ping statistics --- 00:16:40.563 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:40.563 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:16:40.563 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:40.563 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@461 -- # return 0 00:16:40.563 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:40.563 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:40.563 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:40.563 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:40.563 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:40.563 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:40.563 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:40.563 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:16:40.563 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:40.563 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:40.563 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:16:40.563 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=75092 00:16:40.563 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:16:40.563 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 75092 00:16:40.563 09:54:05 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 75092 ']' 00:16:40.563 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:40.563 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:40.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:40.563 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:40.563 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:40.563 09:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:16:40.563 [2024-12-06 09:54:05.695526] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:16:40.563 [2024-12-06 09:54:05.695651] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:40.821 [2024-12-06 09:54:05.845229] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:40.821 [2024-12-06 09:54:05.897658] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:40.821 [2024-12-06 09:54:05.897722] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:40.821 [2024-12-06 09:54:05.897732] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:40.821 [2024-12-06 09:54:05.897741] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:40.821 [2024-12-06 09:54:05.897747] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
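This target was started with '-m 0xE' (the fio_host target above used '-m 0xF'), which is why the reactor messages that follow cover cores 1-3 only. The mask is a plain bitmap of CPU cores:

  # -m takes a hex core mask; each set bit pins one reactor to that core
  # 0xF = 0b1111 -> reactors on cores 0,1,2,3  (fio_host target earlier in this log)
  # 0xE = 0b1110 -> reactors on cores 1,2,3    (this target; core 0 is where bdevperf runs below)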
00:16:40.821 [2024-12-06 09:54:05.899058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:40.821 [2024-12-06 09:54:05.899216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:40.821 [2024-12-06 09:54:05.899225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:40.821 [2024-12-06 09:54:05.973202] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:40.821 09:54:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:40.821 09:54:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:16:40.821 09:54:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:40.821 09:54:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:40.821 09:54:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:16:41.079 09:54:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:41.079 09:54:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:41.079 [2024-12-06 09:54:06.338522] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:41.336 09:54:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:16:41.594 Malloc0 00:16:41.594 09:54:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:41.852 09:54:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:42.111 09:54:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:42.369 [2024-12-06 09:54:07.497968] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:42.369 09:54:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:16:42.628 [2024-12-06 09:54:07.754379] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:16:42.628 09:54:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:16:42.887 [2024-12-06 09:54:08.062909] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:16:42.887 09:54:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=75147 00:16:42.887 09:54:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:16:42.887 09:54:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 
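The failover exercise that follows attaches the same subsystem over two listeners with '-x failover' (active/passive multipath, so the extra paths act as standbys), starts 15 seconds of verify I/O through bdevperf, and then removes and re-adds listeners under load to force path switches. Condensed from the RPCs issued below, with rpc.py and bdevperf.py paths shortened and the sleeps between steps omitted:

  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
  bdevperf.py -s /var/tmp/bdevperf.sock perform_tests          # 15 s of verify I/O starts here
  rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420   # drop the active path
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
  rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421   # drop the second path
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420      # bring the first back
  rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422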
00:16:42.887 09:54:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 75147 /var/tmp/bdevperf.sock 00:16:42.887 09:54:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 75147 ']' 00:16:42.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:42.887 09:54:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:42.887 09:54:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:42.887 09:54:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:42.887 09:54:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:42.887 09:54:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:16:44.262 09:54:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:44.262 09:54:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:16:44.262 09:54:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:16:44.262 NVMe0n1 00:16:44.262 09:54:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:16:44.827 00:16:44.827 09:54:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=75171 00:16:44.827 09:54:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:44.827 09:54:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:16:45.760 09:54:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:46.018 09:54:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:16:49.297 09:54:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:16:49.297 00:16:49.297 09:54:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:16:49.865 09:54:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:16:53.149 09:54:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:53.149 [2024-12-06 09:54:18.117736] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:53.149 09:54:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:16:54.085 09:54:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:16:54.344 09:54:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 75171 00:17:00.921 { 00:17:00.921 "results": [ 00:17:00.921 { 00:17:00.921 "job": "NVMe0n1", 00:17:00.921 "core_mask": "0x1", 00:17:00.921 "workload": "verify", 00:17:00.921 "status": "finished", 00:17:00.921 "verify_range": { 00:17:00.921 "start": 0, 00:17:00.921 "length": 16384 00:17:00.921 }, 00:17:00.921 "queue_depth": 128, 00:17:00.921 "io_size": 4096, 00:17:00.921 "runtime": 15.009908, 00:17:00.921 "iops": 8528.10023885556, 00:17:00.921 "mibps": 33.312891558029534, 00:17:00.921 "io_failed": 3237, 00:17:00.921 "io_timeout": 0, 00:17:00.921 "avg_latency_us": 14607.245685359498, 00:17:00.921 "min_latency_us": 610.6763636363636, 00:17:00.921 "max_latency_us": 16205.265454545455 00:17:00.921 } 00:17:00.921 ], 00:17:00.921 "core_count": 1 00:17:00.921 } 00:17:00.921 09:54:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 75147 00:17:00.921 09:54:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 75147 ']' 00:17:00.921 09:54:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 75147 00:17:00.921 09:54:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:17:00.921 09:54:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:00.921 09:54:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75147 00:17:00.921 killing process with pid 75147 00:17:00.921 09:54:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:00.921 09:54:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:00.921 09:54:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75147' 00:17:00.921 09:54:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 75147 00:17:00.921 09:54:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 75147 00:17:00.921 09:54:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:00.921 [2024-12-06 09:54:08.150423] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:17:00.921 [2024-12-06 09:54:08.150598] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75147 ] 00:17:00.921 [2024-12-06 09:54:08.305145] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:00.921 [2024-12-06 09:54:08.371622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:00.921 [2024-12-06 09:54:08.438153] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:00.921 Running I/O for 15 seconds... 
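The JSON block above is bdevperf's summary for the 15-second verify run: roughly 8528 IOPS with 3237 failed I/Os, the failures being requests caught in flight each time a listener was pulled out from under the active path. What follows is the content of try.txt, bdevperf's own log: bursts of ABORTED - SQ DELETION completions while a path goes away, then bdev_nvme failing over to the next registered portal and resetting the controller. A condensed sketch of the path choreography the script drives, assembled from the commands visible in the trace above (not the literal text of failover.sh; the sleeps only approximate its pacing):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # register two portals up front on the bdevperf side; -x failover lets bdev_nvme switch paths on error
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
    sleep 1
    # drop the active path, add a third portal, and keep rotating listeners under the running workload
    $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    sleep 3
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
    $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
    sleep 3
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    sleep 1
    $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422
    wait    # perform_tests returns when the 15 s workload ends and prints the JSON summary above

The "Start failover from 10.0.0.3:4420 to 10.0.0.3:4421" and "Resetting controller successful" lines further down in try.txt are the visible effect of the first remove_listener, and the per-sample IOPS figures interleaved with the aborts (6933 rising to about 8172 IOPS) show the workload continuing across the transitions.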
00:17:00.921 6933.00 IOPS, 27.08 MiB/s [2024-12-06T09:54:26.193Z] [2024-12-06 09:54:11.113640] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:00.921 [2024-12-06 09:54:11.113713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.921 [2024-12-06 09:54:11.113750] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:00.921 [2024-12-06 09:54:11.113763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.921 [2024-12-06 09:54:11.113777] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:00.921 [2024-12-06 09:54:11.113789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.921 [2024-12-06 09:54:11.113803] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:00.921 [2024-12-06 09:54:11.113816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.921 [2024-12-06 09:54:11.113829] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d2c60 is same with the state(6) to be set 00:17:00.922 [2024-12-06 09:54:11.114073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:65696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.922 [2024-12-06 09:54:11.114100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.922 [2024-12-06 09:54:11.114125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:65824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.922 [2024-12-06 09:54:11.114141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.922 [2024-12-06 09:54:11.114157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:65832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.922 [2024-12-06 09:54:11.114172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.922 [2024-12-06 09:54:11.114186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:65840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.922 [2024-12-06 09:54:11.114200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.922 [2024-12-06 09:54:11.114215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:65848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.922 [2024-12-06 09:54:11.114228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.922 [2024-12-06 09:54:11.114243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:65856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:17:00.922 [2024-12-06 09:54:11.114257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.922 [2024-12-06 09:54:11.114301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:65864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.922 [2024-12-06 09:54:11.114317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.922 [2024-12-06 09:54:11.114331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:65872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.922 [2024-12-06 09:54:11.114345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.922 [2024-12-06 09:54:11.114360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:65880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.922 [2024-12-06 09:54:11.114373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.922 [2024-12-06 09:54:11.114388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:65888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.922 [2024-12-06 09:54:11.114402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.922 [2024-12-06 09:54:11.114416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:65896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.922 [2024-12-06 09:54:11.114430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.922 [2024-12-06 09:54:11.114445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:65904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.922 [2024-12-06 09:54:11.114458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.922 [2024-12-06 09:54:11.114472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:65912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.922 [2024-12-06 09:54:11.114495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.922 [2024-12-06 09:54:11.114511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:65920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.922 [2024-12-06 09:54:11.114524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.922 [2024-12-06 09:54:11.114539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:65928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.922 [2024-12-06 09:54:11.114552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.922 [2024-12-06 09:54:11.114566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:65936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.922 [2024-12-06 09:54:11.114609] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.922 [2024-12-06 09:54:11.114629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:65944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.922 [2024-12-06 09:54:11.114644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.922 [2024-12-06 09:54:11.114659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:65952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.922 [2024-12-06 09:54:11.114673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.922 [2024-12-06 09:54:11.114688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:65960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.922 [2024-12-06 09:54:11.114712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.922 [2024-12-06 09:54:11.114728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:65968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.922 [2024-12-06 09:54:11.114743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.922 [2024-12-06 09:54:11.114758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:65976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.922 [2024-12-06 09:54:11.114772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.922 [2024-12-06 09:54:11.114788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:65984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.922 [2024-12-06 09:54:11.114803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.922 [2024-12-06 09:54:11.114817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.922 [2024-12-06 09:54:11.114831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.922 [2024-12-06 09:54:11.114847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:66000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.922 [2024-12-06 09:54:11.114861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.922 [2024-12-06 09:54:11.114876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:66008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.922 [2024-12-06 09:54:11.114890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.922 [2024-12-06 09:54:11.114905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:66016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.922 [2024-12-06 09:54:11.114919] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.922 [2024-12-06 09:54:11.114934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:66024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.922 [2024-12-06 09:54:11.114948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.922 [2024-12-06 09:54:11.114963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:66032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.922 [2024-12-06 09:54:11.115003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.922 [2024-12-06 09:54:11.115018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:66040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.922 [2024-12-06 09:54:11.115038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.922 [2024-12-06 09:54:11.115054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:66048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.922 [2024-12-06 09:54:11.115068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.922 [2024-12-06 09:54:11.115082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:66056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.922 [2024-12-06 09:54:11.115107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.922 [2024-12-06 09:54:11.115161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:66064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.922 [2024-12-06 09:54:11.115187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.922 [2024-12-06 09:54:11.115204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:66072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.922 [2024-12-06 09:54:11.115220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.922 [2024-12-06 09:54:11.115237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:66080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.922 [2024-12-06 09:54:11.115252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.922 [2024-12-06 09:54:11.115268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:66088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.922 [2024-12-06 09:54:11.115283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.922 [2024-12-06 09:54:11.115299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:66096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.922 [2024-12-06 09:54:11.115314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.923 [2024-12-06 09:54:11.115330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:66104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.923 [2024-12-06 09:54:11.115345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.923 [2024-12-06 09:54:11.115363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:66112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.923 [2024-12-06 09:54:11.115390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.923 [2024-12-06 09:54:11.115406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:66120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.923 [2024-12-06 09:54:11.115431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.923 [2024-12-06 09:54:11.115447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:66128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.923 [2024-12-06 09:54:11.115511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.923 [2024-12-06 09:54:11.115526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:66136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.923 [2024-12-06 09:54:11.115540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.923 [2024-12-06 09:54:11.115555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:66144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.923 [2024-12-06 09:54:11.115570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.923 [2024-12-06 09:54:11.115599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:66152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.923 [2024-12-06 09:54:11.115613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.923 [2024-12-06 09:54:11.115638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:66160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.923 [2024-12-06 09:54:11.115655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.923 [2024-12-06 09:54:11.115678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:66168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.923 [2024-12-06 09:54:11.115698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.923 [2024-12-06 09:54:11.115713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:66176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.923 [2024-12-06 09:54:11.115727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.923 
[2024-12-06 09:54:11.115741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:66184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.923 [2024-12-06 09:54:11.115755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.923 [2024-12-06 09:54:11.115770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:66192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.923 [2024-12-06 09:54:11.115784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.923 [2024-12-06 09:54:11.115799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:66200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.923 [2024-12-06 09:54:11.115812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.923 [2024-12-06 09:54:11.115827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:66208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.923 [2024-12-06 09:54:11.115841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.923 [2024-12-06 09:54:11.115856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:66216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.923 [2024-12-06 09:54:11.115869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.923 [2024-12-06 09:54:11.115884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:66224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.923 [2024-12-06 09:54:11.115898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.923 [2024-12-06 09:54:11.115928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:66232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.923 [2024-12-06 09:54:11.115942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.923 [2024-12-06 09:54:11.115958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:66240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.923 [2024-12-06 09:54:11.115972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.923 [2024-12-06 09:54:11.115987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:66248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.923 [2024-12-06 09:54:11.116001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.923 [2024-12-06 09:54:11.116017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:66256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.923 [2024-12-06 09:54:11.116031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.923 [2024-12-06 09:54:11.116046] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:66264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.923 [2024-12-06 09:54:11.116071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.923 [2024-12-06 09:54:11.116087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:66272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.923 [2024-12-06 09:54:11.116101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.923 [2024-12-06 09:54:11.116116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:66280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.923 [2024-12-06 09:54:11.116130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.923 [2024-12-06 09:54:11.116145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:66288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.923 [2024-12-06 09:54:11.116160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.923 [2024-12-06 09:54:11.116175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:66296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.923 [2024-12-06 09:54:11.116193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.923 [2024-12-06 09:54:11.116208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:66304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.923 [2024-12-06 09:54:11.116223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.923 [2024-12-06 09:54:11.116238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:66312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.923 [2024-12-06 09:54:11.116252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.923 [2024-12-06 09:54:11.116267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:66320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.923 [2024-12-06 09:54:11.116281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.923 [2024-12-06 09:54:11.116296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:66328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.923 [2024-12-06 09:54:11.116324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.923 [2024-12-06 09:54:11.116348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:66336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.923 [2024-12-06 09:54:11.116362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.923 [2024-12-06 09:54:11.116377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:58 nsid:1 lba:66344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.923 [2024-12-06 09:54:11.116390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.923 [2024-12-06 09:54:11.116405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:66352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.923 [2024-12-06 09:54:11.116418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.923 [2024-12-06 09:54:11.116433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:66360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.923 [2024-12-06 09:54:11.116447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.923 [2024-12-06 09:54:11.116469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:66368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.923 [2024-12-06 09:54:11.116485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.923 [2024-12-06 09:54:11.116500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:66376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.923 [2024-12-06 09:54:11.116514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.923 [2024-12-06 09:54:11.116528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:66384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.923 [2024-12-06 09:54:11.116542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.923 [2024-12-06 09:54:11.116556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:66392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.923 [2024-12-06 09:54:11.116570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.924 [2024-12-06 09:54:11.116584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:66400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.924 [2024-12-06 09:54:11.116625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.924 [2024-12-06 09:54:11.116643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:66408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.924 [2024-12-06 09:54:11.116657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.924 [2024-12-06 09:54:11.116673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:66416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.924 [2024-12-06 09:54:11.116687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.924 [2024-12-06 09:54:11.116703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:66424 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:17:00.924 [2024-12-06 09:54:11.116722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.924 [2024-12-06 09:54:11.116745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:66432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.924 [2024-12-06 09:54:11.116759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.924 [2024-12-06 09:54:11.116775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:66440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.924 [2024-12-06 09:54:11.116789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.924 [2024-12-06 09:54:11.116804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:66448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.924 [2024-12-06 09:54:11.116818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.924 [2024-12-06 09:54:11.116833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:66456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.924 [2024-12-06 09:54:11.116848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.924 [2024-12-06 09:54:11.116863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:66464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.924 [2024-12-06 09:54:11.116877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.924 [2024-12-06 09:54:11.116900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:66472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.924 [2024-12-06 09:54:11.116915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.924 [2024-12-06 09:54:11.116930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:66480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.924 [2024-12-06 09:54:11.116945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.924 [2024-12-06 09:54:11.116960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:66488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.924 [2024-12-06 09:54:11.116974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.924 [2024-12-06 09:54:11.117004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:66496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.924 [2024-12-06 09:54:11.117018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.924 [2024-12-06 09:54:11.117033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:66504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.924 [2024-12-06 
09:54:11.117047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.924 [2024-12-06 09:54:11.117061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:66512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.924 [2024-12-06 09:54:11.117075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.924 [2024-12-06 09:54:11.117090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:66520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.924 [2024-12-06 09:54:11.117104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.924 [2024-12-06 09:54:11.117118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:66528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.924 [2024-12-06 09:54:11.117132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.924 [2024-12-06 09:54:11.117147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:66536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.924 [2024-12-06 09:54:11.117160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.924 [2024-12-06 09:54:11.117175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:66544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.924 [2024-12-06 09:54:11.117189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.924 [2024-12-06 09:54:11.117204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:66552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.924 [2024-12-06 09:54:11.117222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.924 [2024-12-06 09:54:11.117237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:66560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.924 [2024-12-06 09:54:11.117251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.924 [2024-12-06 09:54:11.117266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:66568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.924 [2024-12-06 09:54:11.117286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.924 [2024-12-06 09:54:11.117302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:66576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.924 [2024-12-06 09:54:11.117315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.924 [2024-12-06 09:54:11.117330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:66584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.924 [2024-12-06 09:54:11.117353] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.924 [2024-12-06 09:54:11.117367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:66592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.924 [2024-12-06 09:54:11.117381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.924 [2024-12-06 09:54:11.117396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:66600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.924 [2024-12-06 09:54:11.117410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.924 [2024-12-06 09:54:11.117425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:66608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.924 [2024-12-06 09:54:11.117439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.924 [2024-12-06 09:54:11.117453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:66616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.924 [2024-12-06 09:54:11.117467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.924 [2024-12-06 09:54:11.117482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:66624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.924 [2024-12-06 09:54:11.117497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.924 [2024-12-06 09:54:11.117511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:66632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.924 [2024-12-06 09:54:11.117525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.924 [2024-12-06 09:54:11.117539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:66640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.924 [2024-12-06 09:54:11.117553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.924 [2024-12-06 09:54:11.117568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:66648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.924 [2024-12-06 09:54:11.117582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.924 [2024-12-06 09:54:11.117606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:66656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.924 [2024-12-06 09:54:11.117623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.924 [2024-12-06 09:54:11.117639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:66664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.924 [2024-12-06 09:54:11.117653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.924 [2024-12-06 09:54:11.117675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:66672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.925 [2024-12-06 09:54:11.117690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.925 [2024-12-06 09:54:11.117705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:66680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.925 [2024-12-06 09:54:11.117723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.925 [2024-12-06 09:54:11.117739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:66688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.925 [2024-12-06 09:54:11.117753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.925 [2024-12-06 09:54:11.117767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:66696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.925 [2024-12-06 09:54:11.117781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.925 [2024-12-06 09:54:11.117796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:65704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.925 [2024-12-06 09:54:11.117810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.925 [2024-12-06 09:54:11.117824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:65712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.925 [2024-12-06 09:54:11.117838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.925 [2024-12-06 09:54:11.117853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:65720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.925 [2024-12-06 09:54:11.117867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.925 [2024-12-06 09:54:11.117881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:65728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.925 [2024-12-06 09:54:11.117895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.925 [2024-12-06 09:54:11.117909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:65736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.925 [2024-12-06 09:54:11.117923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.925 [2024-12-06 09:54:11.117937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:65744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.925 [2024-12-06 09:54:11.117951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:17:00.925 [2024-12-06 09:54:11.117967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:65752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.925 [2024-12-06 09:54:11.117981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.925 [2024-12-06 09:54:11.117996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:65760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.925 [2024-12-06 09:54:11.118009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.925 [2024-12-06 09:54:11.118024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:65768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.925 [2024-12-06 09:54:11.118043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.925 [2024-12-06 09:54:11.118059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:65776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.925 [2024-12-06 09:54:11.118073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.925 [2024-12-06 09:54:11.118088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:65784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.925 [2024-12-06 09:54:11.118101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.925 [2024-12-06 09:54:11.118116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:65792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.925 [2024-12-06 09:54:11.118130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.925 [2024-12-06 09:54:11.118145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:65800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.925 [2024-12-06 09:54:11.118158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.925 [2024-12-06 09:54:11.118173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:65808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.925 [2024-12-06 09:54:11.118192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.925 [2024-12-06 09:54:11.118207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:65816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.925 [2024-12-06 09:54:11.118221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.925 [2024-12-06 09:54:11.118236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:66704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.925 [2024-12-06 09:54:11.118250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.925 [2024-12-06 
09:54:11.118264] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1943b00 is same with the state(6) to be set 00:17:00.925 [2024-12-06 09:54:11.118280] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:00.925 [2024-12-06 09:54:11.118291] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:00.925 [2024-12-06 09:54:11.118301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66712 len:8 PRP1 0x0 PRP2 0x0 00:17:00.925 [2024-12-06 09:54:11.118313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.925 [2024-12-06 09:54:11.118375] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:17:00.925 [2024-12-06 09:54:11.118395] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:17:00.925 [2024-12-06 09:54:11.122120] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:17:00.925 [2024-12-06 09:54:11.122157] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18d2c60 (9): Bad file descriptor 00:17:00.925 [2024-12-06 09:54:11.144517] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:17:00.925 7685.00 IOPS, 30.02 MiB/s [2024-12-06T09:54:26.197Z] 8150.00 IOPS, 31.84 MiB/s [2024-12-06T09:54:26.197Z] 8172.50 IOPS, 31.92 MiB/s [2024-12-06T09:54:26.197Z] [2024-12-06 09:54:14.818212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:65280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.925 [2024-12-06 09:54:14.818304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.925 [2024-12-06 09:54:14.818352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:65288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.925 [2024-12-06 09:54:14.818370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.925 [2024-12-06 09:54:14.818386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:65296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.925 [2024-12-06 09:54:14.818442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.925 [2024-12-06 09:54:14.818481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:65304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.925 [2024-12-06 09:54:14.818527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.925 [2024-12-06 09:54:14.818573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:65312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.925 [2024-12-06 09:54:14.818606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.925 [2024-12-06 09:54:14.818645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 
lba:65320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.925 [2024-12-06 09:54:14.818661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.925 [2024-12-06 09:54:14.818676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:65328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.925 [2024-12-06 09:54:14.818691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.925 [2024-12-06 09:54:14.818707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:65336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.925 [2024-12-06 09:54:14.818722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.925 [2024-12-06 09:54:14.818738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:65344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.925 [2024-12-06 09:54:14.818753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.925 [2024-12-06 09:54:14.818768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:65352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.925 [2024-12-06 09:54:14.818783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.925 [2024-12-06 09:54:14.818799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:65360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.925 [2024-12-06 09:54:14.818814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.925 [2024-12-06 09:54:14.818830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:65368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.925 [2024-12-06 09:54:14.818845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.926 [2024-12-06 09:54:14.818860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:65376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.926 [2024-12-06 09:54:14.818875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.926 [2024-12-06 09:54:14.818891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:65384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.926 [2024-12-06 09:54:14.818918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.926 [2024-12-06 09:54:14.818936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:65392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.926 [2024-12-06 09:54:14.818959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.926 [2024-12-06 09:54:14.818976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:65400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:17:00.926 [2024-12-06 09:54:14.818991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.926 [2024-12-06 09:54:14.819007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:65408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.926 [2024-12-06 09:54:14.819022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.926 [2024-12-06 09:54:14.819041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:65416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.926 [2024-12-06 09:54:14.819067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.926 [2024-12-06 09:54:14.819082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:65424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.926 [2024-12-06 09:54:14.819097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.926 [2024-12-06 09:54:14.819113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:65432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.926 [2024-12-06 09:54:14.819142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.926 [2024-12-06 09:54:14.819160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:65440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.926 [2024-12-06 09:54:14.819175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.926 [2024-12-06 09:54:14.819191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:65448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.926 [2024-12-06 09:54:14.819206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.926 [2024-12-06 09:54:14.819223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:65456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.926 [2024-12-06 09:54:14.819238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.926 [2024-12-06 09:54:14.819254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:65464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.926 [2024-12-06 09:54:14.819270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.926 [2024-12-06 09:54:14.819286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:64832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.926 [2024-12-06 09:54:14.819302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.926 [2024-12-06 09:54:14.819318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:64840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.926 [2024-12-06 09:54:14.819333] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.926 [2024-12-06 09:54:14.819359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:64848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.926 [2024-12-06 09:54:14.819375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.926 [2024-12-06 09:54:14.819392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:64856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.926 [2024-12-06 09:54:14.819407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.926 [2024-12-06 09:54:14.819424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:64864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.926 [2024-12-06 09:54:14.819440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.926 [2024-12-06 09:54:14.819456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:64872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.926 [2024-12-06 09:54:14.819471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.926 [2024-12-06 09:54:14.819487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:64880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.926 [2024-12-06 09:54:14.819502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.926 [2024-12-06 09:54:14.819518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:64888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.926 [2024-12-06 09:54:14.819533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.926 [2024-12-06 09:54:14.819549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:64896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.926 [2024-12-06 09:54:14.819565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.926 [2024-12-06 09:54:14.819617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:64904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.926 [2024-12-06 09:54:14.819641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.926 [2024-12-06 09:54:14.819664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:64912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.926 [2024-12-06 09:54:14.819686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.926 [2024-12-06 09:54:14.819708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:64920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.926 [2024-12-06 09:54:14.819729] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.926 [2024-12-06 09:54:14.819755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:64928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.926 [2024-12-06 09:54:14.819782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.926 [2024-12-06 09:54:14.819805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:64936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.926 [2024-12-06 09:54:14.819827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.926 [2024-12-06 09:54:14.819849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:64944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.926 [2024-12-06 09:54:14.819883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.926 [2024-12-06 09:54:14.819906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:64952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.926 [2024-12-06 09:54:14.819927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.926 [2024-12-06 09:54:14.819951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:65472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.926 [2024-12-06 09:54:14.819972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.926 [2024-12-06 09:54:14.819996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:65480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.926 [2024-12-06 09:54:14.820018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.926 [2024-12-06 09:54:14.820035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:65488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.926 [2024-12-06 09:54:14.820051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.926 [2024-12-06 09:54:14.820067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:65496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.926 [2024-12-06 09:54:14.820082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.926 [2024-12-06 09:54:14.820098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:65504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.926 [2024-12-06 09:54:14.820113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.926 [2024-12-06 09:54:14.820137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:65512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.926 [2024-12-06 09:54:14.820152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.926 [2024-12-06 09:54:14.820168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:65520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.926 [2024-12-06 09:54:14.820183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.926 [2024-12-06 09:54:14.820199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:65528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.926 [2024-12-06 09:54:14.820214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.926 [2024-12-06 09:54:14.820230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:64960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.926 [2024-12-06 09:54:14.820245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.926 [2024-12-06 09:54:14.820262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:64968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.926 [2024-12-06 09:54:14.820277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.926 [2024-12-06 09:54:14.820294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:64976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.926 [2024-12-06 09:54:14.820309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.926 [2024-12-06 09:54:14.820334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:64984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.926 [2024-12-06 09:54:14.820350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.927 [2024-12-06 09:54:14.820367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:64992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.927 [2024-12-06 09:54:14.820382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.927 [2024-12-06 09:54:14.820406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:65000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.927 [2024-12-06 09:54:14.820431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.927 [2024-12-06 09:54:14.820457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:65008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.927 [2024-12-06 09:54:14.820481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.927 [2024-12-06 09:54:14.820517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:65016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.927 [2024-12-06 09:54:14.820555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:17:00.927 [2024-12-06 09:54:14.820570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:65536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.927 [2024-12-06 09:54:14.820600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.927 [2024-12-06 09:54:14.820644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:65544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.927 [2024-12-06 09:54:14.820661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.927 [2024-12-06 09:54:14.820680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:65552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.927 [2024-12-06 09:54:14.820695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.927 [2024-12-06 09:54:14.820719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:65560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.927 [2024-12-06 09:54:14.820734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.927 [2024-12-06 09:54:14.820751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:65568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.927 [2024-12-06 09:54:14.820766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.927 [2024-12-06 09:54:14.820782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:65576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.927 [2024-12-06 09:54:14.820797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.927 [2024-12-06 09:54:14.820813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:65584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.927 [2024-12-06 09:54:14.820828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.927 [2024-12-06 09:54:14.820844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:65592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.927 [2024-12-06 09:54:14.820859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.927 [2024-12-06 09:54:14.820885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:65024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.927 [2024-12-06 09:54:14.820901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.927 [2024-12-06 09:54:14.820918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:65032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.927 [2024-12-06 09:54:14.820933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.927 [2024-12-06 09:54:14.820950] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:65040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.927 [2024-12-06 09:54:14.820979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.927 [2024-12-06 09:54:14.820995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:65048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.927 [2024-12-06 09:54:14.821009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.927 [2024-12-06 09:54:14.821025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:65056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.927 [2024-12-06 09:54:14.821055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.927 [2024-12-06 09:54:14.821070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:65064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.927 [2024-12-06 09:54:14.821084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.927 [2024-12-06 09:54:14.821099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:65072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.927 [2024-12-06 09:54:14.821113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.927 [2024-12-06 09:54:14.821128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:65080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.927 [2024-12-06 09:54:14.821142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.927 [2024-12-06 09:54:14.821157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:65600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.927 [2024-12-06 09:54:14.821171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.927 [2024-12-06 09:54:14.821186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:65608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.927 [2024-12-06 09:54:14.821201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.927 [2024-12-06 09:54:14.821225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:65616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.927 [2024-12-06 09:54:14.821241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.927 [2024-12-06 09:54:14.821256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:65624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.927 [2024-12-06 09:54:14.821270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.927 [2024-12-06 09:54:14.821285] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:65632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.927 [2024-12-06 09:54:14.821306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.927 [2024-12-06 09:54:14.821322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:65640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.927 [2024-12-06 09:54:14.821336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.927 [2024-12-06 09:54:14.821351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:65648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.927 [2024-12-06 09:54:14.821365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.927 [2024-12-06 09:54:14.821380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:65656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.927 [2024-12-06 09:54:14.821394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.927 [2024-12-06 09:54:14.821409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:65664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.927 [2024-12-06 09:54:14.821423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.927 [2024-12-06 09:54:14.821439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:65672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.927 [2024-12-06 09:54:14.821453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.927 [2024-12-06 09:54:14.821469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:65680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.927 [2024-12-06 09:54:14.821483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.927 [2024-12-06 09:54:14.821498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:65688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.927 [2024-12-06 09:54:14.821512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.927 [2024-12-06 09:54:14.821526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:65696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.927 [2024-12-06 09:54:14.821540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.927 [2024-12-06 09:54:14.821555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:65704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.927 [2024-12-06 09:54:14.821569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.927 [2024-12-06 09:54:14.821642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:65712 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.927 [2024-12-06 09:54:14.821658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.927 [2024-12-06 09:54:14.821674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:65720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.927 [2024-12-06 09:54:14.821689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.927 [2024-12-06 09:54:14.821705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:65088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.927 [2024-12-06 09:54:14.821719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.927 [2024-12-06 09:54:14.821743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:65096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.927 [2024-12-06 09:54:14.821758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.927 [2024-12-06 09:54:14.821779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:65104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.927 [2024-12-06 09:54:14.821795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.927 [2024-12-06 09:54:14.821811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:65112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.927 [2024-12-06 09:54:14.821825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.928 [2024-12-06 09:54:14.821840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:65120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.928 [2024-12-06 09:54:14.821855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.928 [2024-12-06 09:54:14.821871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:65128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.928 [2024-12-06 09:54:14.821886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.928 [2024-12-06 09:54:14.821901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:65136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.928 [2024-12-06 09:54:14.821916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.928 [2024-12-06 09:54:14.821931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:65144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.928 [2024-12-06 09:54:14.821946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.928 [2024-12-06 09:54:14.821962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:65152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.928 
[2024-12-06 09:54:14.821976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.928 [2024-12-06 09:54:14.822007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:65160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.928 [2024-12-06 09:54:14.822021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.928 [2024-12-06 09:54:14.822036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:65168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.928 [2024-12-06 09:54:14.822051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.928 [2024-12-06 09:54:14.822065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:65176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.928 [2024-12-06 09:54:14.822080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.928 [2024-12-06 09:54:14.822095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:65184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.928 [2024-12-06 09:54:14.822110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.928 [2024-12-06 09:54:14.822124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:65192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.928 [2024-12-06 09:54:14.822145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.928 [2024-12-06 09:54:14.822161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:65200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.928 [2024-12-06 09:54:14.822175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.928 [2024-12-06 09:54:14.822190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:65208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.928 [2024-12-06 09:54:14.822205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.928 [2024-12-06 09:54:14.822220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:65728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.928 [2024-12-06 09:54:14.822234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.928 [2024-12-06 09:54:14.822249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:65736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.928 [2024-12-06 09:54:14.822263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.928 [2024-12-06 09:54:14.822283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:65744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.928 [2024-12-06 09:54:14.822298] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.928 [2024-12-06 09:54:14.822313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:65752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.928 [2024-12-06 09:54:14.822327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.928 [2024-12-06 09:54:14.822342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:65760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.928 [2024-12-06 09:54:14.822357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.928 [2024-12-06 09:54:14.822372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:65768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.928 [2024-12-06 09:54:14.822386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.928 [2024-12-06 09:54:14.822401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:65776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.928 [2024-12-06 09:54:14.822415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.928 [2024-12-06 09:54:14.822430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:65784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.928 [2024-12-06 09:54:14.822444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.928 [2024-12-06 09:54:14.822458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:65216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.928 [2024-12-06 09:54:14.822488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.928 [2024-12-06 09:54:14.822505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:65224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.928 [2024-12-06 09:54:14.822520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.928 [2024-12-06 09:54:14.822536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:65232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.928 [2024-12-06 09:54:14.822558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.928 [2024-12-06 09:54:14.822590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:65240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.928 [2024-12-06 09:54:14.822605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.928 [2024-12-06 09:54:14.822636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:65248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.928 [2024-12-06 09:54:14.822653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.928 [2024-12-06 09:54:14.822669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:65256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.928 [2024-12-06 09:54:14.822684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.928 [2024-12-06 09:54:14.822700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:65264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.928 [2024-12-06 09:54:14.822715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.928 [2024-12-06 09:54:14.822730] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a81790 is same with the state(6) to be set 00:17:00.928 [2024-12-06 09:54:14.822749] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:00.928 [2024-12-06 09:54:14.822760] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:00.928 [2024-12-06 09:54:14.822771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:65272 len:8 PRP1 0x0 PRP2 0x0 00:17:00.928 [2024-12-06 09:54:14.822785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.928 [2024-12-06 09:54:14.822800] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:00.928 [2024-12-06 09:54:14.822817] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:00.928 [2024-12-06 09:54:14.822829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65792 len:8 PRP1 0x0 PRP2 0x0 00:17:00.928 [2024-12-06 09:54:14.822843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.928 [2024-12-06 09:54:14.822857] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:00.928 [2024-12-06 09:54:14.822868] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:00.928 [2024-12-06 09:54:14.822879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65800 len:8 PRP1 0x0 PRP2 0x0 00:17:00.928 [2024-12-06 09:54:14.822893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.928 [2024-12-06 09:54:14.822907] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:00.928 [2024-12-06 09:54:14.822918] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:00.928 [2024-12-06 09:54:14.822929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65808 len:8 PRP1 0x0 PRP2 0x0 00:17:00.928 [2024-12-06 09:54:14.822943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.928 [2024-12-06 09:54:14.822957] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:00.928 [2024-12-06 09:54:14.822968] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:00.928 [2024-12-06 
09:54:14.822998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65816 len:8 PRP1 0x0 PRP2 0x0 00:17:00.928 [2024-12-06 09:54:14.823014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.928 [2024-12-06 09:54:14.823029] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:00.928 [2024-12-06 09:54:14.823051] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:00.928 [2024-12-06 09:54:14.823078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65824 len:8 PRP1 0x0 PRP2 0x0 00:17:00.928 [2024-12-06 09:54:14.823092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.928 [2024-12-06 09:54:14.823106] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:00.928 [2024-12-06 09:54:14.823117] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:00.928 [2024-12-06 09:54:14.823151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65832 len:8 PRP1 0x0 PRP2 0x0 00:17:00.928 [2024-12-06 09:54:14.823166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.928 [2024-12-06 09:54:14.823181] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:00.929 [2024-12-06 09:54:14.823192] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:00.929 [2024-12-06 09:54:14.823203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65840 len:8 PRP1 0x0 PRP2 0x0 00:17:00.929 [2024-12-06 09:54:14.823217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.929 [2024-12-06 09:54:14.823230] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:00.929 [2024-12-06 09:54:14.823241] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:00.929 [2024-12-06 09:54:14.823252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65848 len:8 PRP1 0x0 PRP2 0x0 00:17:00.929 [2024-12-06 09:54:14.823266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.929 [2024-12-06 09:54:14.823328] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.3:4421 to 10.0.0.3:4422 00:17:00.929 [2024-12-06 09:54:14.823396] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:00.929 [2024-12-06 09:54:14.823431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.929 [2024-12-06 09:54:14.823446] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:00.929 [2024-12-06 09:54:14.823464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.929 [2024-12-06 09:54:14.823504] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:00.929 [2024-12-06 09:54:14.823517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.929 [2024-12-06 09:54:14.823531] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:00.929 [2024-12-06 09:54:14.823544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.929 [2024-12-06 09:54:14.823558] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:17:00.929 [2024-12-06 09:54:14.823635] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18d2c60 (9): Bad file descriptor 00:17:00.929 [2024-12-06 09:54:14.827784] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:17:00.929 [2024-12-06 09:54:14.856230] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 00:17:00.929 8121.40 IOPS, 31.72 MiB/s [2024-12-06T09:54:26.201Z] 8099.83 IOPS, 31.64 MiB/s [2024-12-06T09:54:26.201Z] 8081.86 IOPS, 31.57 MiB/s [2024-12-06T09:54:26.201Z] 8054.88 IOPS, 31.46 MiB/s [2024-12-06T09:54:26.201Z] 8061.67 IOPS, 31.49 MiB/s [2024-12-06T09:54:26.201Z] [2024-12-06 09:54:19.409762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:104552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.929 [2024-12-06 09:54:19.409830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.929 [2024-12-06 09:54:19.409860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:104560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.929 [2024-12-06 09:54:19.409876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.929 [2024-12-06 09:54:19.409892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:104568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.929 [2024-12-06 09:54:19.409907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.929 [2024-12-06 09:54:19.409922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:104576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.929 [2024-12-06 09:54:19.409948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.929 [2024-12-06 09:54:19.409963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:104584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.929 [2024-12-06 09:54:19.409992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.929 [2024-12-06 09:54:19.410006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:104592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.929 [2024-12-06 09:54:19.410021] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.929 [2024-12-06 09:54:19.410037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:104600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.929 [2024-12-06 09:54:19.410051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.929 [2024-12-06 09:54:19.410066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:104608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.929 [2024-12-06 09:54:19.410080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.929 [2024-12-06 09:54:19.410096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:104104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.929 [2024-12-06 09:54:19.410110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.929 [2024-12-06 09:54:19.410125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:104112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.929 [2024-12-06 09:54:19.410139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.929 [2024-12-06 09:54:19.410158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:104120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.929 [2024-12-06 09:54:19.410172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.929 [2024-12-06 09:54:19.410215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:104128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.929 [2024-12-06 09:54:19.410230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.929 [2024-12-06 09:54:19.410256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:104136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.929 [2024-12-06 09:54:19.410269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.929 [2024-12-06 09:54:19.410284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:104144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.929 [2024-12-06 09:54:19.410297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.929 [2024-12-06 09:54:19.410312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:104152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.929 [2024-12-06 09:54:19.410336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.929 [2024-12-06 09:54:19.410350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:104160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.929 [2024-12-06 09:54:19.410363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.929 [2024-12-06 09:54:19.410378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:104616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.929 [2024-12-06 09:54:19.410393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.929 [2024-12-06 09:54:19.410408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:104624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.929 [2024-12-06 09:54:19.410434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.929 [2024-12-06 09:54:19.410451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:104632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.929 [2024-12-06 09:54:19.410467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.929 [2024-12-06 09:54:19.410481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:104640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.929 [2024-12-06 09:54:19.410495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.929 [2024-12-06 09:54:19.410511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:104648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.929 [2024-12-06 09:54:19.410525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.929 [2024-12-06 09:54:19.410539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:104656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.929 [2024-12-06 09:54:19.410554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.929 [2024-12-06 09:54:19.410568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:104664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.929 [2024-12-06 09:54:19.410597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.929 [2024-12-06 09:54:19.410617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:104672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.929 [2024-12-06 09:54:19.410631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.930 [2024-12-06 09:54:19.410669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:104680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.930 [2024-12-06 09:54:19.410684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.930 [2024-12-06 09:54:19.410699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:104688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.930 [2024-12-06 09:54:19.410712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:17:00.930 [2024-12-06 09:54:19.410727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:104696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.930 [2024-12-06 09:54:19.410742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.930 [2024-12-06 09:54:19.410756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:104704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.930 [2024-12-06 09:54:19.410782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.930 [2024-12-06 09:54:19.410797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:104712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.930 [2024-12-06 09:54:19.410810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.930 [2024-12-06 09:54:19.410825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:104720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.930 [2024-12-06 09:54:19.410838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.930 [2024-12-06 09:54:19.410853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:104728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.930 [2024-12-06 09:54:19.410866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.930 [2024-12-06 09:54:19.410881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:104736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.930 [2024-12-06 09:54:19.410895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.930 [2024-12-06 09:54:19.410910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:104744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.930 [2024-12-06 09:54:19.410923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.930 [2024-12-06 09:54:19.410937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:104752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.930 [2024-12-06 09:54:19.410951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.930 [2024-12-06 09:54:19.410966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:104760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.930 [2024-12-06 09:54:19.410990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.930 [2024-12-06 09:54:19.411005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:104768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.930 [2024-12-06 09:54:19.411019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.930 [2024-12-06 
09:54:19.411045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:104168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.930 [2024-12-06 09:54:19.411066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.930 [2024-12-06 09:54:19.411081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:104176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.930 [2024-12-06 09:54:19.411096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.930 [2024-12-06 09:54:19.411110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:104184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.930 [2024-12-06 09:54:19.411124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.930 [2024-12-06 09:54:19.411183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:104192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.930 [2024-12-06 09:54:19.411199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.930 [2024-12-06 09:54:19.411215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:104200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.930 [2024-12-06 09:54:19.411230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.930 [2024-12-06 09:54:19.411245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:104208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.930 [2024-12-06 09:54:19.411260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.930 [2024-12-06 09:54:19.411277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:104216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.930 [2024-12-06 09:54:19.411291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.930 [2024-12-06 09:54:19.411307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:104224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.930 [2024-12-06 09:54:19.411321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.930 [2024-12-06 09:54:19.411337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:104776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.930 [2024-12-06 09:54:19.411352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.930 [2024-12-06 09:54:19.411367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:104784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.930 [2024-12-06 09:54:19.411382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.930 [2024-12-06 09:54:19.411408] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.930 [2024-12-06 09:54:19.411433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.930 [2024-12-06 09:54:19.411459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:104800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.930 [2024-12-06 09:54:19.411488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.930 [2024-12-06 09:54:19.411503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:104808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.930 [2024-12-06 09:54:19.411517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.930 [2024-12-06 09:54:19.411539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:104816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.930 [2024-12-06 09:54:19.411554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.930 [2024-12-06 09:54:19.411570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:104824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.930 [2024-12-06 09:54:19.411600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.930 [2024-12-06 09:54:19.411614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:104832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.930 [2024-12-06 09:54:19.411641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.930 [2024-12-06 09:54:19.411659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:104840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.930 [2024-12-06 09:54:19.411673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.930 [2024-12-06 09:54:19.411688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:104848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.930 [2024-12-06 09:54:19.411702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.930 [2024-12-06 09:54:19.411716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:104856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.930 [2024-12-06 09:54:19.411730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.930 [2024-12-06 09:54:19.411745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:104864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.930 [2024-12-06 09:54:19.411759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.930 [2024-12-06 09:54:19.411773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:70 nsid:1 lba:104872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.930 [2024-12-06 09:54:19.411787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.930 [2024-12-06 09:54:19.411802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:104880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.930 [2024-12-06 09:54:19.411815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.930 [2024-12-06 09:54:19.411830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:104888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.930 [2024-12-06 09:54:19.411843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.930 [2024-12-06 09:54:19.411858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:104896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.930 [2024-12-06 09:54:19.411871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.930 [2024-12-06 09:54:19.411886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:104904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.930 [2024-12-06 09:54:19.411900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.930 [2024-12-06 09:54:19.411914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:104912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.930 [2024-12-06 09:54:19.411936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.930 [2024-12-06 09:54:19.411952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:104232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.930 [2024-12-06 09:54:19.411966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.930 [2024-12-06 09:54:19.411980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:104240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.930 [2024-12-06 09:54:19.412004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.931 [2024-12-06 09:54:19.412020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:104248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.931 [2024-12-06 09:54:19.412034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.931 [2024-12-06 09:54:19.412050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:104256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.931 [2024-12-06 09:54:19.412063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.931 [2024-12-06 09:54:19.412079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 
lba:104264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.931 [2024-12-06 09:54:19.412094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.931 [2024-12-06 09:54:19.412109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:104272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.931 [2024-12-06 09:54:19.412122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.931 [2024-12-06 09:54:19.412137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:104280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.931 [2024-12-06 09:54:19.412151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.931 [2024-12-06 09:54:19.412165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:104288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.931 [2024-12-06 09:54:19.412180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.931 [2024-12-06 09:54:19.412195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:104296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.931 [2024-12-06 09:54:19.412209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.931 [2024-12-06 09:54:19.412224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:104304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.931 [2024-12-06 09:54:19.412237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.931 [2024-12-06 09:54:19.412252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:104312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.931 [2024-12-06 09:54:19.412266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.931 [2024-12-06 09:54:19.412280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:104320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.931 [2024-12-06 09:54:19.412294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.931 [2024-12-06 09:54:19.412315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:104328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.931 [2024-12-06 09:54:19.412330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.931 [2024-12-06 09:54:19.412356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:104336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.931 [2024-12-06 09:54:19.412370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.931 [2024-12-06 09:54:19.412384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:104344 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:17:00.931 [2024-12-06 09:54:19.412398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.931 [2024-12-06 09:54:19.412413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:104352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.931 [2024-12-06 09:54:19.412427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.931 [2024-12-06 09:54:19.412442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:104920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.931 [2024-12-06 09:54:19.412455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.931 [2024-12-06 09:54:19.412470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:104928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.931 [2024-12-06 09:54:19.412483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.931 [2024-12-06 09:54:19.412498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:104936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.931 [2024-12-06 09:54:19.412512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.931 [2024-12-06 09:54:19.412526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:104944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.931 [2024-12-06 09:54:19.412540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.931 [2024-12-06 09:54:19.412556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:104952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.931 [2024-12-06 09:54:19.412581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.931 [2024-12-06 09:54:19.412598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:104960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.931 [2024-12-06 09:54:19.412612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.931 [2024-12-06 09:54:19.412627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:104968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.931 [2024-12-06 09:54:19.412640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.931 [2024-12-06 09:54:19.412655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:104976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.931 [2024-12-06 09:54:19.412669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.931 [2024-12-06 09:54:19.412683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:104984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.931 
[2024-12-06 09:54:19.412704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.931 [2024-12-06 09:54:19.412720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:104992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.931 [2024-12-06 09:54:19.412735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.931 [2024-12-06 09:54:19.412750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:105000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.931 [2024-12-06 09:54:19.412763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.931 [2024-12-06 09:54:19.412778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:105008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.931 [2024-12-06 09:54:19.412791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.931 [2024-12-06 09:54:19.412806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:105016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.931 [2024-12-06 09:54:19.412820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.931 [2024-12-06 09:54:19.412834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:105024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.931 [2024-12-06 09:54:19.412848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.931 [2024-12-06 09:54:19.412864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:105032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:00.931 [2024-12-06 09:54:19.412878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.931 [2024-12-06 09:54:19.412891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:104360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.931 [2024-12-06 09:54:19.412905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.931 [2024-12-06 09:54:19.412920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:104368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.931 [2024-12-06 09:54:19.412934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.931 [2024-12-06 09:54:19.412948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:104376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.931 [2024-12-06 09:54:19.412973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.931 [2024-12-06 09:54:19.412998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:104384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.931 [2024-12-06 09:54:19.413013] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.931 [2024-12-06 09:54:19.413028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:104392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.931 [2024-12-06 09:54:19.413041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.931 [2024-12-06 09:54:19.413056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:104400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.931 [2024-12-06 09:54:19.413070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.931 [2024-12-06 09:54:19.413086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:104408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.931 [2024-12-06 09:54:19.413107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.931 [2024-12-06 09:54:19.413123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:104416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.931 [2024-12-06 09:54:19.413137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.931 [2024-12-06 09:54:19.413152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:104424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.931 [2024-12-06 09:54:19.413165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.931 [2024-12-06 09:54:19.413180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:104432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.931 [2024-12-06 09:54:19.413194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.931 [2024-12-06 09:54:19.413209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:104440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.931 [2024-12-06 09:54:19.413223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.932 [2024-12-06 09:54:19.413237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:104448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.932 [2024-12-06 09:54:19.413251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.932 [2024-12-06 09:54:19.413266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:104456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.932 [2024-12-06 09:54:19.413280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.932 [2024-12-06 09:54:19.413295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:104464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.932 [2024-12-06 09:54:19.413321] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.932 [2024-12-06 09:54:19.413335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:104472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.932 [2024-12-06 09:54:19.413360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.932 [2024-12-06 09:54:19.413374] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a81450 is same with the state(6) to be set 00:17:00.932 [2024-12-06 09:54:19.413390] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:00.932 [2024-12-06 09:54:19.413401] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:00.932 [2024-12-06 09:54:19.413411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:104480 len:8 PRP1 0x0 PRP2 0x0 00:17:00.932 [2024-12-06 09:54:19.413424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.932 [2024-12-06 09:54:19.413438] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:00.932 [2024-12-06 09:54:19.413449] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:00.932 [2024-12-06 09:54:19.413458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105040 len:8 PRP1 0x0 PRP2 0x0 00:17:00.932 [2024-12-06 09:54:19.413472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.932 [2024-12-06 09:54:19.413492] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:00.932 [2024-12-06 09:54:19.413503] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:00.932 [2024-12-06 09:54:19.413513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105048 len:8 PRP1 0x0 PRP2 0x0 00:17:00.932 [2024-12-06 09:54:19.413527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.932 [2024-12-06 09:54:19.413540] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:00.932 [2024-12-06 09:54:19.413551] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:00.932 [2024-12-06 09:54:19.413561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105056 len:8 PRP1 0x0 PRP2 0x0 00:17:00.932 [2024-12-06 09:54:19.413602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.932 [2024-12-06 09:54:19.413618] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:00.932 [2024-12-06 09:54:19.413628] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:00.932 [2024-12-06 09:54:19.413638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105064 len:8 PRP1 0x0 PRP2 0x0 00:17:00.932 [2024-12-06 09:54:19.413651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:17:00.932 [2024-12-06 09:54:19.413665] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:00.932 [2024-12-06 09:54:19.413675] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:00.932 [2024-12-06 09:54:19.413686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105072 len:8 PRP1 0x0 PRP2 0x0 00:17:00.932 [2024-12-06 09:54:19.413699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.932 [2024-12-06 09:54:19.413712] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:00.932 [2024-12-06 09:54:19.413723] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:00.932 [2024-12-06 09:54:19.413733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105080 len:8 PRP1 0x0 PRP2 0x0 00:17:00.932 [2024-12-06 09:54:19.413746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.932 [2024-12-06 09:54:19.413769] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:00.932 [2024-12-06 09:54:19.413781] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:00.932 [2024-12-06 09:54:19.413791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105088 len:8 PRP1 0x0 PRP2 0x0 00:17:00.932 [2024-12-06 09:54:19.413804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.932 [2024-12-06 09:54:19.413818] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:00.932 [2024-12-06 09:54:19.413829] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:00.932 [2024-12-06 09:54:19.413838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105096 len:8 PRP1 0x0 PRP2 0x0 00:17:00.932 [2024-12-06 09:54:19.413851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.932 [2024-12-06 09:54:19.413865] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:00.932 [2024-12-06 09:54:19.413875] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:00.932 [2024-12-06 09:54:19.413885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105104 len:8 PRP1 0x0 PRP2 0x0 00:17:00.932 [2024-12-06 09:54:19.413905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.932 [2024-12-06 09:54:19.413920] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:00.932 [2024-12-06 09:54:19.413930] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:00.932 [2024-12-06 09:54:19.413940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105112 len:8 PRP1 0x0 PRP2 0x0 00:17:00.932 [2024-12-06 09:54:19.413954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.932 [2024-12-06 09:54:19.413967] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:00.932 [2024-12-06 09:54:19.414004] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:00.932 [2024-12-06 09:54:19.414013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105120 len:8 PRP1 0x0 PRP2 0x0 00:17:00.932 [2024-12-06 09:54:19.414026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.932 [2024-12-06 09:54:19.414038] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:00.932 [2024-12-06 09:54:19.414048] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:00.932 [2024-12-06 09:54:19.414058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:104488 len:8 PRP1 0x0 PRP2 0x0 00:17:00.932 [2024-12-06 09:54:19.414070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.932 [2024-12-06 09:54:19.414083] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:00.932 [2024-12-06 09:54:19.414093] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:00.932 [2024-12-06 09:54:19.414103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:104496 len:8 PRP1 0x0 PRP2 0x0 00:17:00.932 [2024-12-06 09:54:19.414115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.932 [2024-12-06 09:54:19.414128] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:00.932 [2024-12-06 09:54:19.414138] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:00.932 [2024-12-06 09:54:19.414148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:104504 len:8 PRP1 0x0 PRP2 0x0 00:17:00.932 [2024-12-06 09:54:19.414160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.932 [2024-12-06 09:54:19.414172] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:00.932 [2024-12-06 09:54:19.414192] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:00.932 [2024-12-06 09:54:19.414205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:104512 len:8 PRP1 0x0 PRP2 0x0 00:17:00.932 [2024-12-06 09:54:19.414217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.932 [2024-12-06 09:54:19.414230] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:00.932 [2024-12-06 09:54:19.414240] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:00.932 [2024-12-06 09:54:19.414251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:104520 len:8 PRP1 0x0 PRP2 0x0 00:17:00.932 [2024-12-06 09:54:19.414264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.932 [2024-12-06 09:54:19.414276] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:17:00.932 [2024-12-06 09:54:19.414286] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:00.932 [2024-12-06 09:54:19.414302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:104528 len:8 PRP1 0x0 PRP2 0x0 00:17:00.932 [2024-12-06 09:54:19.414316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.932 [2024-12-06 09:54:19.414329] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:00.932 [2024-12-06 09:54:19.414348] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:00.932 [2024-12-06 09:54:19.414358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:104536 len:8 PRP1 0x0 PRP2 0x0 00:17:00.932 [2024-12-06 09:54:19.414377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.932 [2024-12-06 09:54:19.414391] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:00.932 [2024-12-06 09:54:19.414401] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:00.932 [2024-12-06 09:54:19.414411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:104544 len:8 PRP1 0x0 PRP2 0x0 00:17:00.932 [2024-12-06 09:54:19.414424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.932 [2024-12-06 09:54:19.414485] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.3:4422 to 10.0.0.3:4420 00:17:00.932 [2024-12-06 09:54:19.414542] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:00.932 [2024-12-06 09:54:19.414564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.933 [2024-12-06 09:54:19.414578] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:00.933 [2024-12-06 09:54:19.414591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.933 [2024-12-06 09:54:19.414621] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:00.933 [2024-12-06 09:54:19.414636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.933 [2024-12-06 09:54:19.414650] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:00.933 [2024-12-06 09:54:19.414662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.933 [2024-12-06 09:54:19.414675] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 
00:17:00.933 [2024-12-06 09:54:19.414708] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18d2c60 (9): Bad file descriptor 00:17:00.933 [2024-12-06 09:54:19.418271] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:17:00.933 [2024-12-06 09:54:19.443418] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 00:17:00.933 8129.60 IOPS, 31.76 MiB/s [2024-12-06T09:54:26.205Z] 8285.64 IOPS, 32.37 MiB/s [2024-12-06T09:54:26.205Z] 8391.33 IOPS, 32.78 MiB/s [2024-12-06T09:54:26.205Z] 8411.62 IOPS, 32.86 MiB/s [2024-12-06T09:54:26.205Z] 8473.14 IOPS, 33.10 MiB/s [2024-12-06T09:54:26.205Z] 8527.40 IOPS, 33.31 MiB/s 00:17:00.933 Latency(us) 00:17:00.933 [2024-12-06T09:54:26.205Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:00.933 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:00.933 Verification LBA range: start 0x0 length 0x4000 00:17:00.933 NVMe0n1 : 15.01 8528.10 33.31 215.66 0.00 14607.25 610.68 16205.27 00:17:00.933 [2024-12-06T09:54:26.205Z] =================================================================================================================== 00:17:00.933 [2024-12-06T09:54:26.205Z] Total : 8528.10 33.31 215.66 0.00 14607.25 610.68 16205.27 00:17:00.933 Received shutdown signal, test time was about 15.000000 seconds 00:17:00.933 00:17:00.933 Latency(us) 00:17:00.933 [2024-12-06T09:54:26.205Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:00.933 [2024-12-06T09:54:26.205Z] =================================================================================================================== 00:17:00.933 [2024-12-06T09:54:26.205Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:00.933 09:54:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:17:00.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:00.933 09:54:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:17:00.933 09:54:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:17:00.933 09:54:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=75345 00:17:00.933 09:54:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:17:00.933 09:54:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 75345 /var/tmp/bdevperf.sock 00:17:00.933 09:54:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 75345 ']' 00:17:00.933 09:54:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:00.933 09:54:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:00.933 09:54:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:17:00.933 09:54:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:00.933 09:54:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:17:01.197 09:54:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:01.197 09:54:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:17:01.197 09:54:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:17:01.456 [2024-12-06 09:54:26.719958] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:17:01.713 09:54:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:17:01.970 [2024-12-06 09:54:27.068487] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:17:01.970 09:54:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:17:02.227 NVMe0n1 00:17:02.227 09:54:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:17:02.794 00:17:02.794 09:54:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:17:03.053 00:17:03.053 09:54:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:03.053 09:54:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:17:03.312 09:54:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:03.571 09:54:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:17:06.851 09:54:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:17:06.851 09:54:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:06.851 09:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=75433 00:17:06.851 09:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:06.851 09:54:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 75433 00:17:08.226 { 00:17:08.226 "results": [ 00:17:08.226 { 00:17:08.226 "job": "NVMe0n1", 00:17:08.226 "core_mask": "0x1", 00:17:08.226 "workload": "verify", 00:17:08.226 "status": "finished", 00:17:08.226 "verify_range": { 00:17:08.226 "start": 0, 00:17:08.226 "length": 16384 00:17:08.226 }, 00:17:08.226 "queue_depth": 128, 
00:17:08.226 "io_size": 4096, 00:17:08.226 "runtime": 1.011718, 00:17:08.226 "iops": 8666.446578987425, 00:17:08.226 "mibps": 33.85330694916963, 00:17:08.226 "io_failed": 0, 00:17:08.226 "io_timeout": 0, 00:17:08.226 "avg_latency_us": 14684.491254147313, 00:17:08.226 "min_latency_us": 3172.538181818182, 00:17:08.226 "max_latency_us": 14298.763636363636 00:17:08.226 } 00:17:08.226 ], 00:17:08.226 "core_count": 1 00:17:08.226 } 00:17:08.226 09:54:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:08.226 [2024-12-06 09:54:25.301228] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:17:08.226 [2024-12-06 09:54:25.301357] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75345 ] 00:17:08.226 [2024-12-06 09:54:25.448489] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:08.226 [2024-12-06 09:54:25.519617] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:08.226 [2024-12-06 09:54:25.586257] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:08.226 [2024-12-06 09:54:28.715218] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:17:08.226 [2024-12-06 09:54:28.715329] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:08.226 [2024-12-06 09:54:28.715354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:08.226 [2024-12-06 09:54:28.715372] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:08.226 [2024-12-06 09:54:28.715386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:08.226 [2024-12-06 09:54:28.715399] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:08.226 [2024-12-06 09:54:28.715422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:08.226 [2024-12-06 09:54:28.715435] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:08.226 [2024-12-06 09:54:28.715448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:08.226 [2024-12-06 09:54:28.715476] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:17:08.227 [2024-12-06 09:54:28.715523] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:17:08.227 [2024-12-06 09:54:28.715565] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1586c60 (9): Bad file descriptor 00:17:08.227 [2024-12-06 09:54:28.720476] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 
00:17:08.227 Running I/O for 1 seconds... 00:17:08.227 8626.00 IOPS, 33.70 MiB/s 00:17:08.227 Latency(us) 00:17:08.227 [2024-12-06T09:54:33.499Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:08.227 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:08.227 Verification LBA range: start 0x0 length 0x4000 00:17:08.227 NVMe0n1 : 1.01 8666.45 33.85 0.00 0.00 14684.49 3172.54 14298.76 00:17:08.227 [2024-12-06T09:54:33.499Z] =================================================================================================================== 00:17:08.227 [2024-12-06T09:54:33.499Z] Total : 8666.45 33.85 0.00 0.00 14684.49 3172.54 14298.76 00:17:08.227 09:54:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:08.227 09:54:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:17:08.485 09:54:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:08.485 09:54:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:08.485 09:54:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:17:08.743 09:54:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:09.308 09:54:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:17:12.593 09:54:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:12.593 09:54:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:17:12.593 09:54:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 75345 00:17:12.593 09:54:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 75345 ']' 00:17:12.593 09:54:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 75345 00:17:12.593 09:54:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:17:12.593 09:54:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:12.593 09:54:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75345 00:17:12.593 killing process with pid 75345 00:17:12.593 09:54:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:12.593 09:54:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:12.593 09:54:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75345' 00:17:12.593 09:54:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 75345 00:17:12.593 09:54:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 75345 00:17:12.593 09:54:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:17:12.852 09:54:37 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:13.112 09:54:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:17:13.112 09:54:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:13.112 09:54:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:17:13.112 09:54:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:13.112 09:54:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:17:13.112 09:54:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:13.112 09:54:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:17:13.112 09:54:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:13.112 09:54:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:13.112 rmmod nvme_tcp 00:17:13.112 rmmod nvme_fabrics 00:17:13.112 rmmod nvme_keyring 00:17:13.112 09:54:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:13.112 09:54:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:17:13.112 09:54:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:17:13.112 09:54:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 75092 ']' 00:17:13.112 09:54:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 75092 00:17:13.112 09:54:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 75092 ']' 00:17:13.112 09:54:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 75092 00:17:13.112 09:54:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:17:13.112 09:54:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:13.112 09:54:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75092 00:17:13.112 killing process with pid 75092 00:17:13.112 09:54:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:13.112 09:54:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:13.112 09:54:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75092' 00:17:13.112 09:54:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 75092 00:17:13.112 09:54:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 75092 00:17:13.372 09:54:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:13.372 09:54:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:13.372 09:54:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:13.372 09:54:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:17:13.372 09:54:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:13.372 09:54:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:17:13.372 09:54:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:17:13.372 09:54:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 
-- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:13.372 09:54:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:13.372 09:54:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:13.372 09:54:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:13.632 09:54:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:13.632 09:54:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:13.632 09:54:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:13.632 09:54:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:13.632 09:54:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:13.632 09:54:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:13.632 09:54:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:13.632 09:54:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:13.632 09:54:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:13.632 09:54:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:13.632 09:54:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:13.632 09:54:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:13.632 09:54:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:13.632 09:54:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:13.632 09:54:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:13.632 09:54:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@300 -- # return 0 00:17:13.632 ************************************ 00:17:13.632 END TEST nvmf_failover 00:17:13.632 ************************************ 00:17:13.632 00:17:13.632 real 0m33.841s 00:17:13.632 user 2m11.207s 00:17:13.632 sys 0m5.791s 00:17:13.632 09:54:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:13.632 09:54:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:17:13.913 09:54:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:17:13.913 09:54:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:13.913 09:54:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:13.913 09:54:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.913 ************************************ 00:17:13.913 START TEST nvmf_host_discovery 00:17:13.913 ************************************ 00:17:13.913 09:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:17:13.913 * Looking for test storage... 
00:17:13.913 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:13.913 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:13.913 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:17:13.913 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:13.913 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:13.913 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:13.913 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:13.913 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:13.913 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:17:13.913 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:17:13.913 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:17:13.913 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:17:13.913 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:17:13.913 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:17:13.913 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:17:13.913 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:13.913 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:17:13.913 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:17:13.913 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:13.913 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:13.913 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:17:13.913 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:17:13.913 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:13.913 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:17:13.913 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:17:13.913 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:17:13.913 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:17:13.913 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:13.913 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:17:13.914 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:17:13.914 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:13.914 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:13.914 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:17:13.914 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:13.914 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:13.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:13.914 --rc genhtml_branch_coverage=1 00:17:13.914 --rc genhtml_function_coverage=1 00:17:13.914 --rc genhtml_legend=1 00:17:13.914 --rc geninfo_all_blocks=1 00:17:13.914 --rc geninfo_unexecuted_blocks=1 00:17:13.914 00:17:13.914 ' 00:17:13.914 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:13.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:13.914 --rc genhtml_branch_coverage=1 00:17:13.914 --rc genhtml_function_coverage=1 00:17:13.914 --rc genhtml_legend=1 00:17:13.914 --rc geninfo_all_blocks=1 00:17:13.914 --rc geninfo_unexecuted_blocks=1 00:17:13.914 00:17:13.914 ' 00:17:13.914 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:13.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:13.914 --rc genhtml_branch_coverage=1 00:17:13.914 --rc genhtml_function_coverage=1 00:17:13.914 --rc genhtml_legend=1 00:17:13.914 --rc geninfo_all_blocks=1 00:17:13.914 --rc geninfo_unexecuted_blocks=1 00:17:13.914 00:17:13.914 ' 00:17:13.914 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:13.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:13.914 --rc genhtml_branch_coverage=1 00:17:13.914 --rc genhtml_function_coverage=1 00:17:13.914 --rc genhtml_legend=1 00:17:13.914 --rc geninfo_all_blocks=1 00:17:13.914 --rc geninfo_unexecuted_blocks=1 00:17:13.914 00:17:13.914 ' 00:17:13.914 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:13.914 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:17:13.914 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:13.914 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:13.914 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:13.914 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:13.914 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:13.914 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:13.914 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:13.914 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:13.914 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:13.914 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:13.914 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:17:13.914 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:17:13.914 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:13.914 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:13.914 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:13.914 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:13.914 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:13.914 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:17:13.914 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:13.914 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:13.914 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:13.914 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.914 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.914 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.914 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:17:13.914 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.914 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:17:13.914 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:13.914 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:13.914 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:13.914 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:13.914 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:13.914 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:13.914 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:13.914 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:13.914 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:13.914 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:13.914 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:17:13.914 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:17:13.914 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- 
# DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:17:13.914 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:17:13.914 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:17:13.914 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:17:13.914 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:17:13.914 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:13.914 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:13.914 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:13.914 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:13.914 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:13.914 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:13.914 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:13.914 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:13.914 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:13.914 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:13.914 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:13.914 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:13.914 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:13.914 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:13.914 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:13.914 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:13.914 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:13.914 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:13.914 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:13.914 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:13.914 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:13.914 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:13.914 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:13.914 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:13.915 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:13.915 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
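For reference, the veth/namespace topology that nvmf_veth_init builds in the entries that follow can be summarized roughly as the sketch below. It only restates commands visible in this trace (the interface names, the nvmf_tgt_ns_spdk namespace, and the 10.0.0.x/24 addresses come from the variables set above); it is an illustrative recap of one initiator/target pair, not the full helper, which also brings up the second pair and handles cleanup.

    # rough sketch of the topology nvmf_veth_init sets up (one initiator/target pair shown)
    ip netns add nvmf_tgt_ns_spdk                               # target side runs in its own namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator veth pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target veth pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # move the target end into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if                    # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target address
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up   # bridge joins the host-side veth ends
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br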
00:17:13.915 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:13.915 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:13.915 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:13.915 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:13.915 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:13.915 Cannot find device "nvmf_init_br" 00:17:13.915 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:17:13.915 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:13.915 Cannot find device "nvmf_init_br2" 00:17:13.915 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:17:13.915 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:13.915 Cannot find device "nvmf_tgt_br" 00:17:14.175 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # true 00:17:14.175 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:14.175 Cannot find device "nvmf_tgt_br2" 00:17:14.175 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # true 00:17:14.175 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:14.175 Cannot find device "nvmf_init_br" 00:17:14.175 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # true 00:17:14.175 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:14.175 Cannot find device "nvmf_init_br2" 00:17:14.175 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # true 00:17:14.175 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:14.175 Cannot find device "nvmf_tgt_br" 00:17:14.175 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # true 00:17:14.175 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:14.175 Cannot find device "nvmf_tgt_br2" 00:17:14.175 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # true 00:17:14.175 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:14.175 Cannot find device "nvmf_br" 00:17:14.175 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # true 00:17:14.175 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:14.175 Cannot find device "nvmf_init_if" 00:17:14.175 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # true 00:17:14.175 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:14.175 Cannot find device "nvmf_init_if2" 00:17:14.175 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # true 00:17:14.175 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:14.175 Cannot open network namespace "nvmf_tgt_ns_spdk": No such 
file or directory 00:17:14.175 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # true 00:17:14.175 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:14.175 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:14.175 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # true 00:17:14.175 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:14.175 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:14.175 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:14.175 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:14.175 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:14.175 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:14.175 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:14.175 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:14.175 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:14.175 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:14.175 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:14.175 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:14.175 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:14.175 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:14.175 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:14.175 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:14.175 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:14.175 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:14.175 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:14.175 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:14.175 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:14.175 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:14.175 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:14.175 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:14.175 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:14.175 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:14.434 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:14.434 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:14.434 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:14.434 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:14.434 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:14.434 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:14.434 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:14.434 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:14.434 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:17:14.434 00:17:14.434 --- 10.0.0.3 ping statistics --- 00:17:14.434 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:14.434 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:17:14.434 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:14.434 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:14.434 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.058 ms 00:17:14.434 00:17:14.434 --- 10.0.0.4 ping statistics --- 00:17:14.434 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:14.434 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:17:14.434 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:14.434 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:14.434 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:17:14.434 00:17:14.434 --- 10.0.0.1 ping statistics --- 00:17:14.434 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:14.434 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:17:14.435 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:14.435 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:14.435 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:17:14.435 00:17:14.435 --- 10.0.0.2 ping statistics --- 00:17:14.435 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:14.435 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:17:14.435 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:14.435 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@461 -- # return 0 00:17:14.435 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:14.435 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:14.435 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:14.435 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:14.435 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:14.435 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:14.435 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:14.435 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:17:14.435 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:14.435 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:14.435 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:14.435 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=75754 00:17:14.435 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:14.435 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 75754 00:17:14.435 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 75754 ']' 00:17:14.435 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:14.435 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:14.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:14.435 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:14.435 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:14.435 09:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:14.435 [2024-12-06 09:54:39.574251] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:17:14.435 [2024-12-06 09:54:39.574344] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:14.694 [2024-12-06 09:54:39.729765] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:14.694 [2024-12-06 09:54:39.818741] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:14.694 [2024-12-06 09:54:39.819085] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:14.694 [2024-12-06 09:54:39.819215] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:14.694 [2024-12-06 09:54:39.819330] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:14.694 [2024-12-06 09:54:39.819440] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:14.694 [2024-12-06 09:54:39.820097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:14.694 [2024-12-06 09:54:39.898348] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:15.262 09:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:15.262 09:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:17:15.262 09:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:15.262 09:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:15.262 09:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:15.522 09:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:15.522 09:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:15.522 09:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.522 09:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:15.522 [2024-12-06 09:54:40.566709] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:15.522 09:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.522 09:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:17:15.522 09:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.522 09:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:15.522 [2024-12-06 09:54:40.574848] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:17:15.522 09:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.522 09:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:17:15.522 09:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.522 09:54:40 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:15.522 null0 00:17:15.522 09:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.522 09:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:17:15.522 09:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.522 09:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:15.522 null1 00:17:15.522 09:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.522 09:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:17:15.522 09:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.522 09:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:15.522 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:17:15.522 09:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.522 09:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=75786 00:17:15.522 09:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:17:15.522 09:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 75786 /tmp/host.sock 00:17:15.522 09:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 75786 ']' 00:17:15.522 09:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:17:15.522 09:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:15.522 09:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:17:15.522 09:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:15.522 09:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:15.522 [2024-12-06 09:54:40.648794] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:17:15.522 [2024-12-06 09:54:40.649089] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75786 ] 00:17:15.522 [2024-12-06 09:54:40.785007] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:15.782 [2024-12-06 09:54:40.834258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:15.782 [2024-12-06 09:54:40.892999] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:15.782 09:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:15.782 09:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:17:15.782 09:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:15.782 09:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:17:15.782 09:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.782 09:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:15.782 09:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.782 09:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:17:15.782 09:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.782 09:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:15.782 09:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.782 09:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:17:15.782 09:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:17:15.782 09:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:15.782 09:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.782 09:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:15.782 09:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:15.782 09:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:15.782 09:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:15.782 09:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.782 09:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:17:15.782 09:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:17:15.782 09:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:15.782 09:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.782 09:54:41 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:15.782 09:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:15.782 09:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:15.782 09:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:15.782 09:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.042 09:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:17:16.042 09:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:17:16.042 09:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.042 09:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:16.042 09:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.042 09:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:17:16.042 09:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:16.042 09:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.042 09:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:16.042 09:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:16.042 09:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:16.042 09:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:16.042 09:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.042 09:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:17:16.042 09:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:17:16.042 09:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:16.042 09:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.042 09:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:16.042 09:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:16.042 09:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:16.042 09:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:16.042 09:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.042 09:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:17:16.042 09:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:17:16.042 09:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.042 09:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:16.042 09:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.042 09:54:41 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:17:16.042 09:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:16.042 09:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.042 09:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:16.042 09:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:16.042 09:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:16.042 09:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:16.042 09:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.042 09:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:17:16.042 09:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:17:16.042 09:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:16.042 09:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:16.042 09:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.042 09:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:16.042 09:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:16.042 09:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:16.042 09:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.302 09:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:17:16.302 09:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:17:16.302 09:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.302 09:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:16.302 [2024-12-06 09:54:41.331053] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:16.302 09:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.302 09:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:17:16.302 09:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:16.302 09:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.302 09:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:16.302 09:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:16.302 09:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:16.302 09:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:16.302 09:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.302 09:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ 
'' == '' ]] 00:17:16.302 09:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:17:16.302 09:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:16.302 09:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.302 09:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:16.302 09:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:16.302 09:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:16.302 09:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:16.302 09:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.302 09:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:17:16.302 09:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:17:16.302 09:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:17:16.302 09:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:17:16.302 09:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:17:16.302 09:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:17:16.302 09:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:17:16.302 09:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:17:16.302 09:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:17:16.302 09:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:17:16.302 09:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.302 09:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:16.302 09:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:17:16.302 09:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.302 09:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:17:16.302 09:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:17:16.302 09:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:17:16.302 09:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:17:16.302 09:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:17:16.302 09:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.302 09:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:16.302 09:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.302 09:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:17:16.302 09:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:17:16.302 09:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:17:16.302 09:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:17:16.302 09:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:17:16.302 09:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:17:16.302 09:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:16.302 09:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.302 09:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:16.302 09:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:16.302 09:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:16.302 09:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:16.302 09:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.302 09:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:17:16.302 09:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:17:16.869 [2024-12-06 09:54:41.979906] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:17:16.869 [2024-12-06 09:54:41.980060] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:17:16.869 [2024-12-06 09:54:41.980112] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:17:16.869 [2024-12-06 09:54:41.985944] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:17:16.869 [2024-12-06 09:54:42.040264] 
bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:17:16.869 [2024-12-06 09:54:42.041254] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x2461da0:1 started. 00:17:16.869 [2024-12-06 09:54:42.042989] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:17:16.869 [2024-12-06 09:54:42.043163] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:17:16.869 [2024-12-06 09:54:42.048744] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x2461da0 was disconnected and freed. delete nvme_qpair. 00:17:17.437 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:17:17.437 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:17:17.437 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:17:17.437 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:17.437 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:17.437 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.437 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:17.437 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:17.437 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:17.437 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.437 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.437 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:17:17.437 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:17:17.437 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:17:17.437 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:17:17.437 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:17:17.437 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:17:17.437 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:17:17.437 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:17.437 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:17.437 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.437 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:17.437 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:17.437 09:54:42 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:17.437 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.437 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:17:17.437 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:17:17.437 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:17:17.437 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:17:17.437 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:17:17.437 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:17:17.437 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:17:17.438 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:17:17.438 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:17:17.438 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:17:17.438 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.438 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:17.438 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:17:17.438 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:17:17.438 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.698 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:17:17.698 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:17:17.698 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:17:17.698 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:17:17.698 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:17:17.698 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:17:17.698 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:17:17.698 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:17:17.698 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:17:17.698 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:17:17.698 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 
00:17:17.698 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:17:17.698 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.698 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:17.698 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.698 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:17:17.698 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:17:17.698 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:17:17.698 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:17:17.698 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:17:17.698 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.698 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:17.698 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.698 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:17:17.698 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:17:17.698 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:17:17.698 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:17:17.698 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:17:17.698 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:17:17.698 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:17.698 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.698 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:17.698 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:17.698 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:17.698 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:17.698 [2024-12-06 09:54:42.822477] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x2470190:1 started. 00:17:17.698 [2024-12-06 09:54:42.828887] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x2470190 was disconnected and freed. delete nvme_qpair. 
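The repeated waitforcondition checks in this trace follow a poll-with-timeout pattern from common/autotest_common.sh; a simplified sketch of that pattern, reconstructed from the traced lines (local cond, max=10, eval, sleep 1), is shown below. The real helper may handle the timeout case differently.

    # simplified sketch of the waitforcondition polling pattern seen in this trace
    waitforcondition() {
        local cond=$1
        local max=10                      # retry budget, as in the traced helper
        while (( max-- )); do
            eval "$cond" && return 0      # condition met -> success
            sleep 1                       # back off before re-checking
        done
        return 1                          # condition never became true
    }
    # e.g. wait until both null bdevs show up through discovery:
    waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'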
00:17:17.698 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.698 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:17:17.698 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:17:17.698 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:17:17.698 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:17:17.698 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:17:17.698 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:17:17.698 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:17:17.698 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:17:17.698 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:17:17.698 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:17:17.698 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:17:17.698 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.698 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:17.698 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:17:17.698 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.698 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:17:17.698 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:17:17.698 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:17:17.698 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:17:17.698 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 00:17:17.698 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.698 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:17.699 [2024-12-06 09:54:42.932334] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:17:17.699 [2024-12-06 09:54:42.933363] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:17:17.699 [2024-12-06 09:54:42.933404] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:17:17.699 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.699 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:17:17.699 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:17:17.699 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:17:17.699 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:17:17.699 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:17:17.699 [2024-12-06 09:54:42.939324] bdev_nvme.c:7435:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for nvme0 00:17:17.699 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:17:17.699 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:17.699 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.699 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:17.699 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:17.699 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:17.699 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:17.699 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.959 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.959 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:17:17.959 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:17:17.959 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:17:17.959 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:17:17.959 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:17:17.959 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:17:17.959 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:17:17.959 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:17.959 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:17.959 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:17.959 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:17.959 09:54:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.959 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:17.959 [2024-12-06 09:54:43.003029] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4421 00:17:17.959 [2024-12-06 09:54:43.003096] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:17:17.959 [2024-12-06 09:54:43.003108] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:17:17.959 [2024-12-06 09:54:43.003114] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:17:17.959 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.959 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:17:17.959 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:17:17.959 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:17:17.959 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:17:17.959 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:17:17.959 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:17:17.959 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:17:17.959 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:17:17.959 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:17:17.959 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.959 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:17.959 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:17:17.959 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:17:17.959 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:17:17.959 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.959 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:17:17.959 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:17:17.959 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:17:17.959 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:17:17.959 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:17:17.959 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:17:17.959 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:17:17.959 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:17:17.959 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:17:17.959 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:17:17.959 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:17:17.959 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:17:17.959 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.959 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:17.959 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.959 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:17:17.960 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:17:17.960 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:17:17.960 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:17:17.960 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:17:17.960 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.960 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:17.960 [2024-12-06 09:54:43.169528] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:17:17.960 [2024-12-06 09:54:43.169779] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:17:17.960 [2024-12-06 09:54:43.170482] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:17.960 [2024-12-06 09:54:43.170516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.960 [2024-12-06 09:54:43.170545] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:17.960 [2024-12-06 09:54:43.170553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.960 [2024-12-06 09:54:43.170562] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:17.960 [2024-12-06 09:54:43.170572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.960 [2024-12-06 09:54:43.170610] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:17.960 [2024-12-06 09:54:43.170621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.960 [2024-12-06 09:54:43.170630] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243dfb0 is same with the state(6) to be set 00:17:17.960 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.960 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:17:17.960 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:17:17.960 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local 
max=10 00:17:17.960 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:17:17.960 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:17:17.960 [2024-12-06 09:54:43.175735] bdev_nvme.c:7298:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 not found 00:17:17.960 [2024-12-06 09:54:43.175767] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:17:17.960 [2024-12-06 09:54:43.175826] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243dfb0 (9): Bad file descriptor 00:17:17.960 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:17:17.960 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:17.960 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:17.960 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:17.960 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:17.960 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.960 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:17.960 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.220 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.220 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:17:18.220 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:17:18.220 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:17:18.220 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:17:18.220 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:17:18.220 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:17:18.220 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:17:18.220 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:18.220 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:18.220 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.220 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:18.220 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:18.220 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:18.220 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.220 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:17:18.220 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:17:18.220 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:17:18.220 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:17:18.220 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:17:18.220 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:17:18.220 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:17:18.220 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:17:18.220 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:17:18.220 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:17:18.220 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:17:18.220 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.220 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:17:18.220 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:18.220 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.220 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:17:18.220 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:17:18.220 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:17:18.220 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:17:18.220 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:17:18.220 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:17:18.220 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:17:18.220 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:17:18.220 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:17:18.220 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:17:18.220 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:17:18.220 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.220 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # 
set +x 00:17:18.220 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:17:18.220 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.220 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:17:18.220 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:17:18.220 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:17:18.220 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:17:18.220 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:17:18.220 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.220 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:18.220 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.220 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:17:18.220 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:17:18.220 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:17:18.220 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:17:18.220 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:17:18.220 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:17:18.220 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:18.220 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.220 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:18.220 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:18.220 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:18.220 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:18.220 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.220 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:17:18.220 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:17:18.220 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:17:18.220 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:17:18.221 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:17:18.221 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:17:18.221 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # 
eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:17:18.221 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:17:18.221 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:18.221 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:18.221 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.221 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:18.221 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:18.221 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:18.480 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.480 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:17:18.480 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:17:18.480 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:17:18.480 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:17:18.480 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:17:18.480 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:17:18.480 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:17:18.480 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:17:18.480 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:17:18.480 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:17:18.480 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:17:18.480 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:17:18.480 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.480 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:18.480 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.480 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:17:18.480 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:17:18.480 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:17:18.480 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:17:18.480 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:18.480 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.480 09:54:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:19.417 [2024-12-06 09:54:44.600505] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:17:19.417 [2024-12-06 09:54:44.600543] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:17:19.417 [2024-12-06 09:54:44.600578] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:17:19.417 [2024-12-06 09:54:44.606537] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem nvme0 00:17:19.417 [2024-12-06 09:54:44.664890] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.3:4421 00:17:19.417 [2024-12-06 09:54:44.665829] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x2437dd0:1 started. 00:17:19.417 [2024-12-06 09:54:44.668045] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:17:19.417 [2024-12-06 09:54:44.668104] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:17:19.417 09:54:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.417 [2024-12-06 09:54:44.669505] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x2437dd0 was disconnected and freed. delete nvme_qpair. 
00:17:19.417 09:54:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:19.417 09:54:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:17:19.417 09:54:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:19.417 09:54:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:19.417 09:54:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:19.417 09:54:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:19.417 09:54:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:19.417 09:54:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:19.417 09:54:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.417 09:54:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:19.417 request: 00:17:19.417 { 00:17:19.417 "name": "nvme", 00:17:19.417 "trtype": "tcp", 00:17:19.417 "traddr": "10.0.0.3", 00:17:19.417 "adrfam": "ipv4", 00:17:19.417 "trsvcid": "8009", 00:17:19.417 "hostnqn": "nqn.2021-12.io.spdk:test", 00:17:19.417 "wait_for_attach": true, 00:17:19.417 "method": "bdev_nvme_start_discovery", 00:17:19.417 "req_id": 1 00:17:19.417 } 00:17:19.417 Got JSON-RPC error response 00:17:19.417 response: 00:17:19.417 { 00:17:19.417 "code": -17, 00:17:19.417 "message": "File exists" 00:17:19.417 } 00:17:19.678 09:54:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:19.678 09:54:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:17:19.678 09:54:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:19.678 09:54:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:19.678 09:54:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:19.678 09:54:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:17:19.678 09:54:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:17:19.678 09:54:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.678 09:54:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:19.678 09:54:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:17:19.678 09:54:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:17:19.678 09:54:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:17:19.678 09:54:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.678 09:54:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:17:19.678 09:54:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:17:19.678 09:54:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:19.678 09:54:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.678 09:54:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:19.678 09:54:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:19.678 09:54:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:19.678 09:54:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:19.678 09:54:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.678 09:54:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:17:19.678 09:54:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:19.678 09:54:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:17:19.678 09:54:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:19.678 09:54:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:19.678 09:54:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:19.678 09:54:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:19.678 09:54:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:19.678 09:54:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:19.678 09:54:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.678 09:54:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:19.678 request: 00:17:19.678 { 00:17:19.678 "name": "nvme_second", 00:17:19.678 "trtype": "tcp", 00:17:19.678 "traddr": "10.0.0.3", 00:17:19.678 "adrfam": "ipv4", 00:17:19.678 "trsvcid": "8009", 00:17:19.678 "hostnqn": "nqn.2021-12.io.spdk:test", 00:17:19.678 "wait_for_attach": true, 00:17:19.678 "method": "bdev_nvme_start_discovery", 00:17:19.678 "req_id": 1 00:17:19.678 } 00:17:19.678 Got JSON-RPC error response 00:17:19.678 response: 00:17:19.678 { 00:17:19.678 "code": -17, 00:17:19.678 "message": "File exists" 00:17:19.678 } 00:17:19.678 09:54:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:19.678 09:54:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:17:19.678 09:54:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:19.678 09:54:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # 
[[ -n '' ]] 00:17:19.678 09:54:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:19.678 09:54:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:17:19.678 09:54:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:17:19.678 09:54:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:17:19.678 09:54:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:17:19.678 09:54:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.678 09:54:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:19.678 09:54:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:17:19.678 09:54:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.678 09:54:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:17:19.678 09:54:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:17:19.678 09:54:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:19.678 09:54:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:19.678 09:54:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.678 09:54:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:19.678 09:54:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:19.678 09:54:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:19.678 09:54:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.678 09:54:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:17:19.678 09:54:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:17:19.678 09:54:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:17:19.679 09:54:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:17:19.679 09:54:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:19.679 09:54:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:19.679 09:54:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:19.679 09:54:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:19.679 09:54:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:17:19.679 09:54:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:19.679 09:54:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:21.056 [2024-12-06 09:54:45.952447] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:21.056 [2024-12-06 09:54:45.952522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2439bb0 with addr=10.0.0.3, port=8010 00:17:21.056 [2024-12-06 09:54:45.952547] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:17:21.056 [2024-12-06 09:54:45.952558] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:17:21.056 [2024-12-06 09:54:45.952567] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:17:21.994 [2024-12-06 09:54:46.952432] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:21.994 [2024-12-06 09:54:46.952519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2439bb0 with addr=10.0.0.3, port=8010 00:17:21.994 [2024-12-06 09:54:46.952544] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:17:21.994 [2024-12-06 09:54:46.952554] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:17:21.994 [2024-12-06 09:54:46.952563] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:17:22.932 [2024-12-06 09:54:47.952304] bdev_nvme.c:7554:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] timed out while attaching discovery ctrlr 00:17:22.932 request: 00:17:22.932 { 00:17:22.932 "name": "nvme_second", 00:17:22.932 "trtype": "tcp", 00:17:22.932 "traddr": "10.0.0.3", 00:17:22.932 "adrfam": "ipv4", 00:17:22.932 "trsvcid": "8010", 00:17:22.932 "hostnqn": "nqn.2021-12.io.spdk:test", 00:17:22.932 "wait_for_attach": false, 00:17:22.932 "attach_timeout_ms": 3000, 00:17:22.932 "method": "bdev_nvme_start_discovery", 00:17:22.932 "req_id": 1 00:17:22.932 } 00:17:22.932 Got JSON-RPC error response 00:17:22.932 response: 00:17:22.932 { 00:17:22.932 "code": -110, 00:17:22.932 "message": "Connection timed out" 00:17:22.932 } 00:17:22.932 09:54:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:22.932 09:54:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:17:22.932 09:54:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:22.932 09:54:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:22.932 09:54:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:22.932 09:54:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:17:22.932 09:54:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:17:22.932 09:54:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.932 09:54:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:22.932 09:54:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:17:22.932 09:54:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:17:22.932 09:54:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:17:22.932 09:54:47 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.932 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:17:22.932 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:17:22.932 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 75786 00:17:22.933 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:17:22.933 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:22.933 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:17:22.933 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:22.933 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:17:22.933 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:22.933 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:22.933 rmmod nvme_tcp 00:17:22.933 rmmod nvme_fabrics 00:17:22.933 rmmod nvme_keyring 00:17:22.933 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:22.933 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:17:22.933 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:17:22.933 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 75754 ']' 00:17:22.933 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 75754 00:17:22.933 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 75754 ']' 00:17:22.933 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 75754 00:17:22.933 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:17:22.933 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:22.933 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75754 00:17:22.933 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:22.933 killing process with pid 75754 00:17:22.933 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:22.933 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75754' 00:17:22.933 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 75754 00:17:22.933 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 75754 00:17:23.191 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:23.191 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:23.191 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:23.191 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:17:23.191 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:23.191 09:54:48 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:17:23.191 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:17:23.191 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:23.191 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:23.191 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:23.191 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:23.191 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:23.450 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:23.450 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:23.450 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:23.450 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:23.450 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:23.450 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:23.450 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:23.450 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:23.450 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:23.450 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:23.450 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:23.450 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:23.450 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:23.450 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:23.450 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@300 -- # return 0 00:17:23.450 00:17:23.450 real 0m9.743s 00:17:23.450 user 0m18.024s 00:17:23.450 sys 0m2.032s 00:17:23.450 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:23.450 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:23.450 ************************************ 00:17:23.450 END TEST nvmf_host_discovery 00:17:23.450 ************************************ 00:17:23.450 09:54:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:17:23.450 09:54:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:23.450 09:54:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:23.450 09:54:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.710 
************************************ 00:17:23.710 START TEST nvmf_host_multipath_status 00:17:23.710 ************************************ 00:17:23.710 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:17:23.710 * Looking for test storage... 00:17:23.710 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:23.710 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:23.710 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lcov --version 00:17:23.710 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:23.710 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:23.710 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:23.710 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:23.710 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:23.710 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:17:23.710 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:17:23.710 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:17:23.710 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:17:23.710 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:17:23.710 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:17:23.710 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:17:23.710 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:23.710 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:17:23.711 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:17:23.711 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:23.711 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:23.711 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:17:23.711 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:17:23.711 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:23.711 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:17:23.711 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:17:23.711 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:17:23.711 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:17:23.711 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:23.711 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:17:23.711 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:17:23.711 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:23.711 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:23.711 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:17:23.711 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:23.711 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:23.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:23.711 --rc genhtml_branch_coverage=1 00:17:23.711 --rc genhtml_function_coverage=1 00:17:23.711 --rc genhtml_legend=1 00:17:23.711 --rc geninfo_all_blocks=1 00:17:23.711 --rc geninfo_unexecuted_blocks=1 00:17:23.711 00:17:23.711 ' 00:17:23.711 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:23.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:23.711 --rc genhtml_branch_coverage=1 00:17:23.711 --rc genhtml_function_coverage=1 00:17:23.711 --rc genhtml_legend=1 00:17:23.711 --rc geninfo_all_blocks=1 00:17:23.711 --rc geninfo_unexecuted_blocks=1 00:17:23.711 00:17:23.711 ' 00:17:23.711 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:23.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:23.711 --rc genhtml_branch_coverage=1 00:17:23.711 --rc genhtml_function_coverage=1 00:17:23.711 --rc genhtml_legend=1 00:17:23.711 --rc geninfo_all_blocks=1 00:17:23.711 --rc geninfo_unexecuted_blocks=1 00:17:23.711 00:17:23.711 ' 00:17:23.711 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:23.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:23.711 --rc genhtml_branch_coverage=1 00:17:23.711 --rc genhtml_function_coverage=1 00:17:23.711 --rc genhtml_legend=1 00:17:23.711 --rc geninfo_all_blocks=1 00:17:23.711 --rc geninfo_unexecuted_blocks=1 00:17:23.711 00:17:23.711 ' 00:17:23.711 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:23.711 09:54:48 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:17:23.711 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:23.711 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:23.711 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:23.711 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:23.711 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:23.711 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:23.711 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:23.711 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:23.711 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:23.711 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:23.711 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:17:23.711 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:17:23.711 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:23.711 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:23.711 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:23.711 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:23.711 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:23.711 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:17:23.711 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:23.711 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:23.711 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:23.711 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.711 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.711 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.711 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:17:23.711 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.711 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:17:23.711 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:23.711 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:23.711 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:23.711 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:23.711 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:23.711 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:23.711 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:23.711 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:23.711 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:23.711 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:23.711 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:23.711 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:23.711 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:23.711 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:17:23.711 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:23.711 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:17:23.711 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:17:23.711 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:23.711 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:23.711 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:23.711 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:23.711 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:23.711 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:23.711 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:23.711 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:23.711 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:23.711 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:23.712 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:23.712 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:23.712 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:23.712 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:23.712 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:23.712 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:23.712 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:23.712 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:23.712 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:23.712 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:23.712 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:23.712 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:23.712 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@153 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:23.712 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:23.712 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:23.712 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:23.712 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:23.712 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:23.712 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:23.712 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:23.712 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:23.712 Cannot find device "nvmf_init_br" 00:17:23.712 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:17:23.712 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:23.712 Cannot find device "nvmf_init_br2" 00:17:23.712 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:17:23.712 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:23.712 Cannot find device "nvmf_tgt_br" 00:17:23.712 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # true 00:17:23.712 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:23.712 Cannot find device "nvmf_tgt_br2" 00:17:23.712 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # true 00:17:23.712 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:23.971 Cannot find device "nvmf_init_br" 00:17:23.971 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # true 00:17:23.971 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:23.971 Cannot find device "nvmf_init_br2" 00:17:23.971 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # true 00:17:23.971 09:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:23.971 Cannot find device "nvmf_tgt_br" 00:17:23.971 09:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # true 00:17:23.971 09:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:23.971 Cannot find device "nvmf_tgt_br2" 00:17:23.971 09:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # true 00:17:23.971 09:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:23.971 Cannot find device "nvmf_br" 00:17:23.971 09:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # true 00:17:23.971 09:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link delete 
nvmf_init_if 00:17:23.971 Cannot find device "nvmf_init_if" 00:17:23.971 09:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # true 00:17:23.971 09:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:23.971 Cannot find device "nvmf_init_if2" 00:17:23.971 09:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # true 00:17:23.971 09:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:23.971 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:23.971 09:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # true 00:17:23.971 09:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:23.971 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:23.971 09:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # true 00:17:23.971 09:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:23.971 09:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:23.971 09:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:23.971 09:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:23.971 09:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:23.971 09:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:23.971 09:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:23.971 09:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:23.971 09:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:23.971 09:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:23.971 09:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:23.971 09:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:23.971 09:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:23.971 09:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:23.971 09:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:23.971 09:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:23.971 09:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:23.971 09:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:23.971 09:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:23.971 09:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:23.971 09:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:23.971 09:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:23.971 09:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:24.231 09:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:24.231 09:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:24.231 09:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:24.231 09:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:24.231 09:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:24.231 09:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:24.231 09:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:24.231 09:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:24.231 09:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:24.231 09:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:24.231 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:24.231 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:17:24.231 00:17:24.231 --- 10.0.0.3 ping statistics --- 00:17:24.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:24.231 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:17:24.231 09:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:24.231 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:24.231 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.065 ms 00:17:24.231 00:17:24.231 --- 10.0.0.4 ping statistics --- 00:17:24.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:24.231 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:17:24.231 09:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:24.231 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:24.231 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:17:24.231 00:17:24.231 --- 10.0.0.1 ping statistics --- 00:17:24.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:24.231 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:17:24.231 09:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:24.231 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:24.231 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.106 ms 00:17:24.231 00:17:24.231 --- 10.0.0.2 ping statistics --- 00:17:24.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:24.231 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:17:24.231 09:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:24.231 09:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@461 -- # return 0 00:17:24.231 09:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:24.231 09:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:24.231 09:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:24.231 09:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:24.231 09:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:24.231 09:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:24.231 09:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:24.231 09:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:17:24.232 09:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:24.232 09:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:24.232 09:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:17:24.232 09:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=76298 00:17:24.232 09:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:17:24.232 09:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 76298 00:17:24.232 09:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 76298 ']' 00:17:24.232 09:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:24.232 09:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:24.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:24.232 09:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
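For readers following the xtrace above: the network plumbing that nvmf_veth_init performs before the target is started boils down to the sketch below. This is a condensed, hand-written reconstruction from the commands in the trace (interface names and 10.0.0.x addresses exactly as logged), not the verbatim body of test/nvmf/common.sh; teardown, error handling and the ipts comment tagging are omitted. The earlier "Cannot find device" / "Cannot open network namespace" messages are just the cleanup pass deleting leftovers from a previous run, each swallowed by the "# true" that follows it in the trace.

    # Namespace for the target; the initiator ends stay in the root namespace.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # Addressing: two initiator IPs and two target IPs on the same /24.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

    # Bring everything up and stitch the peer ends together with a bridge.
    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done

    # Open TCP/4420 towards the initiator interfaces and let the bridge forward,
    # mirroring the ipts calls in the trace (minus the SPDK_NVMF comment tags).
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The four pings that follow in the trace are simply the sanity check that both initiator addresses can reach both target addresses across the bridge before nvmf_tgt is launched inside the namespace.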
00:17:24.232 09:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:24.232 09:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:17:24.232 [2024-12-06 09:54:49.424887] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:17:24.232 [2024-12-06 09:54:49.425502] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:24.491 [2024-12-06 09:54:49.574543] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:24.491 [2024-12-06 09:54:49.633279] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:24.491 [2024-12-06 09:54:49.633349] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:24.492 [2024-12-06 09:54:49.633360] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:24.492 [2024-12-06 09:54:49.633368] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:24.492 [2024-12-06 09:54:49.633374] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:24.492 [2024-12-06 09:54:49.634487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:24.492 [2024-12-06 09:54:49.634500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:24.492 [2024-12-06 09:54:49.687822] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:24.492 09:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:24.492 09:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:17:24.492 09:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:24.492 09:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:24.492 09:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:17:24.751 09:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:24.751 09:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=76298 00:17:24.751 09:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:25.011 [2024-12-06 09:54:50.107877] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:25.011 09:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:17:25.271 Malloc0 00:17:25.271 09:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:17:25.530 09:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:25.790 09:54:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:26.048 [2024-12-06 09:54:51.253830] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:26.048 09:54:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:17:26.307 [2024-12-06 09:54:51.510202] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:17:26.307 09:54:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=76346 00:17:26.307 09:54:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:26.307 09:54:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 76346 /var/tmp/bdevperf.sock 00:17:26.307 09:54:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:17:26.307 09:54:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 76346 ']' 00:17:26.307 09:54:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:26.307 09:54:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:26.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:26.307 09:54:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
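Pulled out of the trace, the target-side configuration up to this point is the short RPC sequence below (paths exactly as logged). The two listeners on the same subsystem are what later give the initiator two paths to flip between; the bdevperf launch is included for context, since -z keeps it idle until perform_tests is sent over its private RPC socket.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Transport, backing bdev, subsystem and namespace.
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0

    # Two listeners on the same subsystem and IP, different ports.
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421

    # Initiator side: bdevperf with its own RPC socket, idle until perform_tests.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 &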
00:17:26.307 09:54:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:26.307 09:54:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:17:27.724 09:54:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:27.724 09:54:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:17:27.724 09:54:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:17:27.724 09:54:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:17:27.984 Nvme0n1 00:17:27.984 09:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:17:28.550 Nvme0n1 00:17:28.550 09:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:17:28.550 09:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:17:30.479 09:54:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:17:30.479 09:54:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:17:30.737 09:54:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:17:31.303 09:54:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:17:32.241 09:54:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:17:32.241 09:54:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:17:32.241 09:54:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:32.241 09:54:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:32.500 09:54:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:32.500 09:54:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:17:32.500 09:54:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:32.500 09:54:57 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:32.759 09:54:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:32.759 09:54:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:32.759 09:54:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:32.759 09:54:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:33.327 09:54:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:33.327 09:54:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:33.327 09:54:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:33.327 09:54:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:33.327 09:54:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:33.327 09:54:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:33.584 09:54:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:33.584 09:54:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:33.843 09:54:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:33.843 09:54:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:17:33.843 09:54:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:33.843 09:54:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:34.101 09:54:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:34.101 09:54:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:17:34.101 09:54:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:17:34.361 09:54:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:17:34.619 09:54:59 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:17:35.555 09:55:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:17:35.555 09:55:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:17:35.555 09:55:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:35.555 09:55:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:35.814 09:55:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:35.814 09:55:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:17:35.814 09:55:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:35.814 09:55:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:36.072 09:55:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:36.072 09:55:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:36.072 09:55:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:36.072 09:55:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:36.331 09:55:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:36.331 09:55:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:36.331 09:55:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:36.331 09:55:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:36.590 09:55:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:36.590 09:55:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:36.590 09:55:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:36.590 09:55:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:37.157 09:55:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:37.157 09:55:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:17:37.157 09:55:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:37.157 09:55:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:37.416 09:55:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:37.416 09:55:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:17:37.416 09:55:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:17:37.674 09:55:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:17:37.933 09:55:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:17:38.868 09:55:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:17:38.868 09:55:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:17:38.868 09:55:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:38.868 09:55:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:39.127 09:55:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:39.127 09:55:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:17:39.127 09:55:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:39.127 09:55:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:39.722 09:55:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:39.722 09:55:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:39.722 09:55:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:39.722 09:55:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:39.722 09:55:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:39.722 09:55:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 
connected true 00:17:39.981 09:55:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:39.981 09:55:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:40.239 09:55:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:40.239 09:55:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:40.239 09:55:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:40.239 09:55:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:40.497 09:55:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:40.497 09:55:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:17:40.497 09:55:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:40.497 09:55:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:40.755 09:55:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:40.755 09:55:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:17:40.755 09:55:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:17:41.013 09:55:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:17:41.272 09:55:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:17:42.652 09:55:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:17:42.652 09:55:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:17:42.652 09:55:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:42.652 09:55:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:42.652 09:55:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:42.652 09:55:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:17:42.652 09:55:07 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:42.652 09:55:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:42.911 09:55:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:42.911 09:55:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:42.911 09:55:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:42.911 09:55:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:43.168 09:55:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:43.168 09:55:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:43.168 09:55:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:43.168 09:55:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:43.427 09:55:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:43.427 09:55:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:43.427 09:55:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:43.427 09:55:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:43.685 09:55:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:43.685 09:55:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:17:43.685 09:55:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:43.685 09:55:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:44.253 09:55:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:44.253 09:55:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:17:44.253 09:55:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:17:44.511 09:55:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:17:44.770 09:55:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:17:45.706 09:55:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:17:45.706 09:55:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:17:45.706 09:55:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:45.706 09:55:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:45.965 09:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:45.965 09:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:17:45.965 09:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:45.965 09:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:46.224 09:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:46.224 09:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:46.224 09:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:46.224 09:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:46.791 09:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:46.791 09:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:46.791 09:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:46.791 09:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:46.791 09:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:46.791 09:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:17:46.791 09:55:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:46.791 09:55:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").accessible' 00:17:47.050 09:55:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:47.050 09:55:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:17:47.050 09:55:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:47.050 09:55:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:47.308 09:55:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:47.308 09:55:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:17:47.308 09:55:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:17:47.566 09:55:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:17:48.134 09:55:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:17:49.069 09:55:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:17:49.069 09:55:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:17:49.070 09:55:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:49.070 09:55:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:49.328 09:55:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:49.328 09:55:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:17:49.328 09:55:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:49.328 09:55:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:49.586 09:55:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:49.586 09:55:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:49.586 09:55:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:49.586 09:55:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
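Every status check in this stretch of the trace is the same three-step pattern: query bdevperf's I/O paths, pick one attribute of the path whose listener port matches, and compare it with the expected value. Reconstructed from the trace (the real helper lives in host/multipath_status.sh, so the argument handling here is only a sketch):

    bdevperf_rpc_sock=/var/tmp/bdevperf.sock
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    port_status() {
        local port=$1 attr=$2 expected=$3 actual
        # attr is one of: current, connected, accessible
        actual=$("$rpc_py" -s "$bdevperf_rpc_sock" bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$port\").$attr")
        [[ $actual == "$expected" ]]
    }

    # e.g. assert that the 4420 path currently carries I/O and the 4421 path does not:
    port_status 4420 current true
    port_status 4421 current false

check_status simply calls port_status six times per step: current, connected and accessible for each of the two ports, in the order seen in the trace.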
00:17:49.845 09:55:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:49.845 09:55:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:49.845 09:55:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:49.845 09:55:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:50.103 09:55:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:50.104 09:55:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:17:50.104 09:55:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:50.104 09:55:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:50.362 09:55:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:50.362 09:55:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:17:50.362 09:55:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:50.362 09:55:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:50.621 09:55:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:50.621 09:55:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:17:50.879 09:55:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:17:50.879 09:55:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:17:51.137 09:55:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:17:51.395 09:55:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:17:52.333 09:55:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:17:52.333 09:55:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:17:52.333 09:55:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
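The switch to active_active just above changes what "current" means for the two paths. Both paths exist because the same controller name was attached once per listener earlier in the trace; under the default active_passive policy only one optimized path is reported current at a time, whereas after the policy switch both optimized paths carry I/O and report current, which is why the next check expects true on both ports. For reference, the attach and policy calls as they appear in the log:

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock

    # One bdev (Nvme0n1), two paths: the same controller attached via both listeners.
    "$rpc_py" -s "$sock" bdev_nvme_set_options -r -1
    "$rpc_py" -s "$sock" bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
    "$rpc_py" -s "$sock" bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10

    # Flip the policy for the resulting bdev.
    "$rpc_py" -s "$sock" bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active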
00:17:52.333 09:55:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:52.591 09:55:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:52.591 09:55:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:17:52.591 09:55:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:52.591 09:55:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:52.850 09:55:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:52.850 09:55:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:52.850 09:55:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:52.850 09:55:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:53.108 09:55:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:53.108 09:55:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:53.108 09:55:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:53.108 09:55:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:53.367 09:55:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:53.367 09:55:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:53.367 09:55:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:53.367 09:55:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:53.626 09:55:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:53.626 09:55:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:17:53.626 09:55:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:53.626 09:55:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:53.885 09:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:53.885 
09:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:17:53.885 09:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:17:54.144 09:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:17:54.402 09:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:17:55.780 09:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:17:55.780 09:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:17:55.780 09:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:55.780 09:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:55.780 09:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:55.780 09:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:17:55.780 09:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:55.780 09:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:56.039 09:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:56.039 09:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:56.039 09:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:56.039 09:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:56.297 09:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:56.297 09:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:56.297 09:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:56.297 09:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:56.557 09:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:56.557 09:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:56.557 09:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:56.557 09:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:57.127 09:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:57.127 09:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:17:57.127 09:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:57.127 09:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:57.385 09:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:57.385 09:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:17:57.385 09:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:17:57.644 09:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:17:57.902 09:55:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:17:59.278 09:55:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:17:59.278 09:55:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:17:59.278 09:55:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:59.278 09:55:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:59.278 09:55:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:59.278 09:55:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:17:59.278 09:55:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:59.278 09:55:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:59.537 09:55:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:59.537 09:55:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 
connected true 00:17:59.537 09:55:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:59.537 09:55:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:59.796 09:55:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:59.796 09:55:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:59.796 09:55:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:59.796 09:55:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:00.056 09:55:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:00.056 09:55:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:00.056 09:55:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:00.056 09:55:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:00.314 09:55:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:00.314 09:55:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:00.314 09:55:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:00.314 09:55:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:00.572 09:55:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:00.572 09:55:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:18:00.572 09:55:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:18:01.140 09:55:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:18:01.399 09:55:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:18:02.334 09:55:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:18:02.334 09:55:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:18:02.334 09:55:27 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:02.334 09:55:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:02.594 09:55:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:02.594 09:55:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:18:02.594 09:55:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:02.594 09:55:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:02.853 09:55:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:02.853 09:55:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:02.853 09:55:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:02.853 09:55:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:03.112 09:55:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:03.112 09:55:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:03.112 09:55:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:03.112 09:55:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:03.372 09:55:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:03.372 09:55:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:03.372 09:55:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:03.372 09:55:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:03.631 09:55:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:03.631 09:55:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:18:03.631 09:55:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:03.889 09:55:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").accessible' 00:18:04.148 09:55:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:04.149 09:55:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 76346 00:18:04.149 09:55:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 76346 ']' 00:18:04.149 09:55:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 76346 00:18:04.149 09:55:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:18:04.149 09:55:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:04.149 09:55:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76346 00:18:04.149 killing process with pid 76346 00:18:04.149 09:55:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:04.149 09:55:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:04.149 09:55:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76346' 00:18:04.149 09:55:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 76346 00:18:04.149 09:55:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 76346 00:18:04.149 { 00:18:04.149 "results": [ 00:18:04.149 { 00:18:04.149 "job": "Nvme0n1", 00:18:04.149 "core_mask": "0x4", 00:18:04.149 "workload": "verify", 00:18:04.149 "status": "terminated", 00:18:04.149 "verify_range": { 00:18:04.149 "start": 0, 00:18:04.149 "length": 16384 00:18:04.149 }, 00:18:04.149 "queue_depth": 128, 00:18:04.149 "io_size": 4096, 00:18:04.149 "runtime": 35.563438, 00:18:04.149 "iops": 7932.332076555703, 00:18:04.149 "mibps": 30.985672174045714, 00:18:04.149 "io_failed": 0, 00:18:04.149 "io_timeout": 0, 00:18:04.149 "avg_latency_us": 16105.180486241065, 00:18:04.149 "min_latency_us": 398.4290909090909, 00:18:04.149 "max_latency_us": 4026531.84 00:18:04.149 } 00:18:04.149 ], 00:18:04.149 "core_count": 1 00:18:04.149 } 00:18:04.410 09:55:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 76346 00:18:04.410 09:55:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:04.410 [2024-12-06 09:54:51.588606] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:18:04.410 [2024-12-06 09:54:51.588742] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76346 ] 00:18:04.410 [2024-12-06 09:54:51.728357] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:04.410 [2024-12-06 09:54:51.781458] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:04.410 [2024-12-06 09:54:51.858110] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:04.410 Running I/O for 90 seconds... 
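The JSON block printed when bdevperf (pid 76346) is terminated is its per-job result record. If that block is captured to a file, the headline numbers can be pulled out with jq; a small sketch, assuming the record was saved as results.json (hypothetical file name, field names taken from the block above):

# Summarize the terminated bdevperf job from its result record.
jq -r '.results[] | "\(.job): \(.iops|floor) IOPS, \(.mibps) MiB/s, \(.runtime)s runtime, status \(.status)"' results.json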
00:18:04.410 6774.00 IOPS, 26.46 MiB/s [2024-12-06T09:55:29.682Z] 7563.00 IOPS, 29.54 MiB/s [2024-12-06T09:55:29.682Z] 7735.33 IOPS, 30.22 MiB/s [2024-12-06T09:55:29.682Z] 7890.00 IOPS, 30.82 MiB/s [2024-12-06T09:55:29.682Z] 8033.20 IOPS, 31.38 MiB/s [2024-12-06T09:55:29.682Z] 8272.83 IOPS, 32.32 MiB/s [2024-12-06T09:55:29.683Z] 8333.29 IOPS, 32.55 MiB/s [2024-12-06T09:55:29.683Z] 8400.62 IOPS, 32.81 MiB/s [2024-12-06T09:55:29.683Z] 8483.11 IOPS, 33.14 MiB/s [2024-12-06T09:55:29.683Z] 8509.30 IOPS, 33.24 MiB/s [2024-12-06T09:55:29.683Z] 8544.45 IOPS, 33.38 MiB/s [2024-12-06T09:55:29.683Z] 8575.08 IOPS, 33.50 MiB/s [2024-12-06T09:55:29.683Z] 8617.00 IOPS, 33.66 MiB/s [2024-12-06T09:55:29.683Z] 8680.36 IOPS, 33.91 MiB/s [2024-12-06T09:55:29.683Z] 8674.73 IOPS, 33.89 MiB/s [2024-12-06T09:55:29.683Z] [2024-12-06 09:55:09.541480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:54056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.411 [2024-12-06 09:55:09.541562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:04.411 [2024-12-06 09:55:09.541659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:54064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.411 [2024-12-06 09:55:09.541681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:04.411 [2024-12-06 09:55:09.541719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:54072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.411 [2024-12-06 09:55:09.541734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:04.411 [2024-12-06 09:55:09.541756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:54080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.411 [2024-12-06 09:55:09.541770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:04.411 [2024-12-06 09:55:09.541790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:54088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.411 [2024-12-06 09:55:09.541819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:04.411 [2024-12-06 09:55:09.541839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:54096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.411 [2024-12-06 09:55:09.541853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:04.411 [2024-12-06 09:55:09.541872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:54104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.411 [2024-12-06 09:55:09.541887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:04.411 [2024-12-06 09:55:09.541906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:54112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.411 [2024-12-06 09:55:09.541920] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:04.411 [2024-12-06 09:55:09.541939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:53544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.411 [2024-12-06 09:55:09.541954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:04.411 [2024-12-06 09:55:09.542007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:53552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.411 [2024-12-06 09:55:09.542023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:04.411 [2024-12-06 09:55:09.542057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:53560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.411 [2024-12-06 09:55:09.542071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:04.411 [2024-12-06 09:55:09.542107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:53568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.411 [2024-12-06 09:55:09.542121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:04.411 [2024-12-06 09:55:09.542140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:53576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.411 [2024-12-06 09:55:09.542154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:04.411 [2024-12-06 09:55:09.542174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:53584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.411 [2024-12-06 09:55:09.542188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:04.411 [2024-12-06 09:55:09.542207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:53592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.411 [2024-12-06 09:55:09.542221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:04.411 [2024-12-06 09:55:09.542240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:53600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.411 [2024-12-06 09:55:09.542254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:04.411 [2024-12-06 09:55:09.542273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:53608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.411 [2024-12-06 09:55:09.542287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:04.411 [2024-12-06 09:55:09.542307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:53616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:18:04.411 [2024-12-06 09:55:09.542321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:04.411 [2024-12-06 09:55:09.542340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:53624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.411 [2024-12-06 09:55:09.542354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:04.411 [2024-12-06 09:55:09.542373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:53632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.411 [2024-12-06 09:55:09.542387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:04.411 [2024-12-06 09:55:09.542406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:53640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.411 [2024-12-06 09:55:09.542420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:04.411 [2024-12-06 09:55:09.542450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:53648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.411 [2024-12-06 09:55:09.542465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:04.411 [2024-12-06 09:55:09.542500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:53656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.411 [2024-12-06 09:55:09.542513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:04.411 [2024-12-06 09:55:09.542532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:53664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.411 [2024-12-06 09:55:09.542547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:04.411 [2024-12-06 09:55:09.542570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:54120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.411 [2024-12-06 09:55:09.542585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:04.411 [2024-12-06 09:55:09.542604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:54128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.411 [2024-12-06 09:55:09.542618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:04.411 [2024-12-06 09:55:09.542654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:54136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.411 [2024-12-06 09:55:09.542700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:04.411 [2024-12-06 09:55:09.542738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 
nsid:1 lba:54144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.411 [2024-12-06 09:55:09.542753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:04.411 [2024-12-06 09:55:09.542774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:54152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.411 [2024-12-06 09:55:09.542789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:04.411 [2024-12-06 09:55:09.542826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:54160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.411 [2024-12-06 09:55:09.542842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:04.411 [2024-12-06 09:55:09.542879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:54168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.411 [2024-12-06 09:55:09.542895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:04.411 [2024-12-06 09:55:09.542917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:54176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.411 [2024-12-06 09:55:09.542932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:04.411 [2024-12-06 09:55:09.542954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:53672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.411 [2024-12-06 09:55:09.542970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:04.412 [2024-12-06 09:55:09.543003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:53680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.412 [2024-12-06 09:55:09.543037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:04.412 [2024-12-06 09:55:09.543072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:53688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.412 [2024-12-06 09:55:09.543087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:04.412 [2024-12-06 09:55:09.543124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:53696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.412 [2024-12-06 09:55:09.543139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:04.412 [2024-12-06 09:55:09.543187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:53704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.412 [2024-12-06 09:55:09.543206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:04.412 [2024-12-06 09:55:09.543228] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:53712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.412 [2024-12-06 09:55:09.543244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.412 [2024-12-06 09:55:09.543275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:53720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.412 [2024-12-06 09:55:09.543290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:04.412 [2024-12-06 09:55:09.543313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:53728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.412 [2024-12-06 09:55:09.543328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:04.412 [2024-12-06 09:55:09.543350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:53736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.412 [2024-12-06 09:55:09.543365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:04.412 [2024-12-06 09:55:09.543387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:53744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.412 [2024-12-06 09:55:09.543403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:04.412 [2024-12-06 09:55:09.543424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:53752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.412 [2024-12-06 09:55:09.543440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:04.412 [2024-12-06 09:55:09.543462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:53760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.412 [2024-12-06 09:55:09.543498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:04.412 [2024-12-06 09:55:09.543519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:53768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.412 [2024-12-06 09:55:09.543535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:04.412 [2024-12-06 09:55:09.543556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:53776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.412 [2024-12-06 09:55:09.543593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:04.412 [2024-12-06 09:55:09.543644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:53784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.412 [2024-12-06 09:55:09.543662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000a p:0 m:0 
dnr:0 00:18:04.412 [2024-12-06 09:55:09.543700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:53792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.412 [2024-12-06 09:55:09.543716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:04.412 [2024-12-06 09:55:09.543752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:53800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.412 [2024-12-06 09:55:09.543767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:04.412 [2024-12-06 09:55:09.543798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:53808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.412 [2024-12-06 09:55:09.543813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:04.412 [2024-12-06 09:55:09.543833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:53816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.412 [2024-12-06 09:55:09.543848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:04.412 [2024-12-06 09:55:09.543867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:53824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.412 [2024-12-06 09:55:09.543881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:04.412 [2024-12-06 09:55:09.543901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:53832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.412 [2024-12-06 09:55:09.543916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:04.412 [2024-12-06 09:55:09.543936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:53840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.412 [2024-12-06 09:55:09.543951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:04.412 [2024-12-06 09:55:09.543985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:53848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.412 [2024-12-06 09:55:09.544014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:04.412 [2024-12-06 09:55:09.544034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:53856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.412 [2024-12-06 09:55:09.544048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:04.412 [2024-12-06 09:55:09.544071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:54184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.412 [2024-12-06 09:55:09.544085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:04.412 [2024-12-06 09:55:09.544104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:54192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.412 [2024-12-06 09:55:09.544126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:04.412 [2024-12-06 09:55:09.544147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:54200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.412 [2024-12-06 09:55:09.544162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:04.412 [2024-12-06 09:55:09.544181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:54208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.412 [2024-12-06 09:55:09.544195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:04.412 [2024-12-06 09:55:09.544214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:54216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.412 [2024-12-06 09:55:09.544228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:04.412 [2024-12-06 09:55:09.544246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:54224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.412 [2024-12-06 09:55:09.544260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:04.412 [2024-12-06 09:55:09.544279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:54232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.412 [2024-12-06 09:55:09.544293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:04.412 [2024-12-06 09:55:09.544312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:54240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.412 [2024-12-06 09:55:09.544326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:04.412 [2024-12-06 09:55:09.544344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:54248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.412 [2024-12-06 09:55:09.544359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:04.412 [2024-12-06 09:55:09.544379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:54256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.412 [2024-12-06 09:55:09.544393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:04.412 [2024-12-06 09:55:09.544412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:54264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.412 [2024-12-06 09:55:09.544426] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:04.412 [2024-12-06 09:55:09.544445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:54272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.412 [2024-12-06 09:55:09.544459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:04.412 [2024-12-06 09:55:09.544478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:54280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.412 [2024-12-06 09:55:09.544491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:04.412 [2024-12-06 09:55:09.544510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:54288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.412 [2024-12-06 09:55:09.544524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:04.412 [2024-12-06 09:55:09.544568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:54296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.412 [2024-12-06 09:55:09.544583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:04.412 [2024-12-06 09:55:09.544603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:54304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.413 [2024-12-06 09:55:09.544634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:04.413 [2024-12-06 09:55:09.544667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:53864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.413 [2024-12-06 09:55:09.544685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:04.413 [2024-12-06 09:55:09.544707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:53872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.413 [2024-12-06 09:55:09.544722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:04.413 [2024-12-06 09:55:09.544742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:53880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.413 [2024-12-06 09:55:09.544757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:04.413 [2024-12-06 09:55:09.544777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:53888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.413 [2024-12-06 09:55:09.544792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:04.413 [2024-12-06 09:55:09.544813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:53896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:04.413 [2024-12-06 09:55:09.544845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:04.413 [2024-12-06 09:55:09.544866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:53904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.413 [2024-12-06 09:55:09.544883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:04.413 [2024-12-06 09:55:09.544904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:53912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.413 [2024-12-06 09:55:09.544920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:04.413 [2024-12-06 09:55:09.544941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:53920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.413 [2024-12-06 09:55:09.544956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:04.413 [2024-12-06 09:55:09.544977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:53928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.413 [2024-12-06 09:55:09.544994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:04.413 [2024-12-06 09:55:09.545031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:53936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.413 [2024-12-06 09:55:09.545046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:04.413 [2024-12-06 09:55:09.545107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:53944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.413 [2024-12-06 09:55:09.545124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:04.413 [2024-12-06 09:55:09.545144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:53952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.413 [2024-12-06 09:55:09.545160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:04.413 [2024-12-06 09:55:09.545181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:53960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.413 [2024-12-06 09:55:09.545196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:04.413 [2024-12-06 09:55:09.545216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:53968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.413 [2024-12-06 09:55:09.545231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:04.413 [2024-12-06 09:55:09.545251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 
nsid:1 lba:53976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.413 [2024-12-06 09:55:09.545266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:04.413 [2024-12-06 09:55:09.545287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:53984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.413 [2024-12-06 09:55:09.545302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:04.413 [2024-12-06 09:55:09.545369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:54312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.413 [2024-12-06 09:55:09.545385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:04.413 [2024-12-06 09:55:09.545406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:54320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.413 [2024-12-06 09:55:09.545421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:04.413 [2024-12-06 09:55:09.545442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:54328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.413 [2024-12-06 09:55:09.545456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:04.413 [2024-12-06 09:55:09.545477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:54336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.413 [2024-12-06 09:55:09.545492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:04.413 [2024-12-06 09:55:09.545512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:54344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.413 [2024-12-06 09:55:09.545527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:04.413 [2024-12-06 09:55:09.545547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:54352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.413 [2024-12-06 09:55:09.545564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:04.413 [2024-12-06 09:55:09.545585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:54360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.413 [2024-12-06 09:55:09.545609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:04.413 [2024-12-06 09:55:09.545647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:54368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.413 [2024-12-06 09:55:09.545679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:04.413 [2024-12-06 09:55:09.545709] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:54376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.413 [2024-12-06 09:55:09.545742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:04.413 [2024-12-06 09:55:09.545764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:54384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.413 [2024-12-06 09:55:09.545779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:04.413 [2024-12-06 09:55:09.545803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:54392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.413 [2024-12-06 09:55:09.545827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:04.413 [2024-12-06 09:55:09.545848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:54400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.413 [2024-12-06 09:55:09.545863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:04.413 [2024-12-06 09:55:09.545884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:54408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.413 [2024-12-06 09:55:09.545898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:04.413 [2024-12-06 09:55:09.545918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:54416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.413 [2024-12-06 09:55:09.545947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:04.413 [2024-12-06 09:55:09.545976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:54424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.413 [2024-12-06 09:55:09.545991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:04.413 [2024-12-06 09:55:09.546011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:54432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.413 [2024-12-06 09:55:09.546025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:04.413 [2024-12-06 09:55:09.546060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:53992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.413 [2024-12-06 09:55:09.546075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:04.413 [2024-12-06 09:55:09.546094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:54000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.413 [2024-12-06 09:55:09.546108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 
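Every print_completion notice in this stretch reports the same status, ASYMMETRIC ACCESS INACCESSIBLE (sct 0x3 / sc 0x2), consistent with I/O that was in flight on a listener just flipped to the inaccessible ANA state and then retried by the multipath layer on the remaining path. A quick way to tally completion statuses from the captured log, assuming the try.txt path shown above:

# Count completions per NVMe status string in the captured bdevperf log.
grep -o 'print_completion: \*NOTICE\*: [A-Z ]*(../..)' \
    /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt | sort | uniq -c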
00:18:04.413 [2024-12-06 09:55:09.546128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:54008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.413 [2024-12-06 09:55:09.546149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:04.413 [2024-12-06 09:55:09.546171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:54016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.413 [2024-12-06 09:55:09.546186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:04.413 [2024-12-06 09:55:09.546206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:54024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.413 [2024-12-06 09:55:09.546227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:04.413 [2024-12-06 09:55:09.546247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:54032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.413 [2024-12-06 09:55:09.546262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:04.414 [2024-12-06 09:55:09.546282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:54040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.414 [2024-12-06 09:55:09.546296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:04.414 [2024-12-06 09:55:09.547202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:54048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.414 [2024-12-06 09:55:09.547233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:04.414 [2024-12-06 09:55:09.547268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:54440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.414 [2024-12-06 09:55:09.547286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:04.414 [2024-12-06 09:55:09.547315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.414 [2024-12-06 09:55:09.547331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:04.414 [2024-12-06 09:55:09.547360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:54456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.414 [2024-12-06 09:55:09.547376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:04.414 [2024-12-06 09:55:09.547404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:54464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.414 [2024-12-06 09:55:09.547420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:04.414 [2024-12-06 09:55:09.547448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:54472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.414 [2024-12-06 09:55:09.547464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:04.414 [2024-12-06 09:55:09.547518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:54480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.414 [2024-12-06 09:55:09.547540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:04.414 [2024-12-06 09:55:09.547568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:54488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.414 [2024-12-06 09:55:09.547596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:04.414 [2024-12-06 09:55:09.547715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:54496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.414 [2024-12-06 09:55:09.547739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:04.414 [2024-12-06 09:55:09.547767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:54504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.414 [2024-12-06 09:55:09.547782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:04.414 [2024-12-06 09:55:09.547809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:54512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.414 [2024-12-06 09:55:09.547824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:04.414 [2024-12-06 09:55:09.547850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:54520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.414 [2024-12-06 09:55:09.547865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:04.414 [2024-12-06 09:55:09.547903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:54528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.414 [2024-12-06 09:55:09.547919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:04.414 [2024-12-06 09:55:09.547946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:54536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.414 [2024-12-06 09:55:09.547960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:04.414 [2024-12-06 09:55:09.547986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:54544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.414 [2024-12-06 09:55:09.548017] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:04.414 [2024-12-06 09:55:09.548046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:54552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.414 [2024-12-06 09:55:09.548061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:04.414 [2024-12-06 09:55:09.548110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:54560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.414 [2024-12-06 09:55:09.548125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:04.414 8610.31 IOPS, 33.63 MiB/s [2024-12-06T09:55:29.686Z] 8103.82 IOPS, 31.66 MiB/s [2024-12-06T09:55:29.686Z] 7653.61 IOPS, 29.90 MiB/s [2024-12-06T09:55:29.686Z] 7250.79 IOPS, 28.32 MiB/s [2024-12-06T09:55:29.686Z] 6948.60 IOPS, 27.14 MiB/s [2024-12-06T09:55:29.686Z] 7103.81 IOPS, 27.75 MiB/s [2024-12-06T09:55:29.686Z] 7194.73 IOPS, 28.10 MiB/s [2024-12-06T09:55:29.686Z] 7255.35 IOPS, 28.34 MiB/s [2024-12-06T09:55:29.686Z] 7335.25 IOPS, 28.65 MiB/s [2024-12-06T09:55:29.686Z] 7419.40 IOPS, 28.98 MiB/s [2024-12-06T09:55:29.686Z] 7529.88 IOPS, 29.41 MiB/s [2024-12-06T09:55:29.686Z] 7630.56 IOPS, 29.81 MiB/s [2024-12-06T09:55:29.686Z] 7690.04 IOPS, 30.04 MiB/s [2024-12-06T09:55:29.686Z] 7719.76 IOPS, 30.16 MiB/s [2024-12-06T09:55:29.686Z] 7764.83 IOPS, 30.33 MiB/s [2024-12-06T09:55:29.686Z] 7828.16 IOPS, 30.58 MiB/s [2024-12-06T09:55:29.686Z] 7888.50 IOPS, 30.81 MiB/s [2024-12-06T09:55:29.686Z] [2024-12-06 09:55:26.397059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:107136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.414 [2024-12-06 09:55:26.397150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:04.414 [2024-12-06 09:55:26.397251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:107152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.414 [2024-12-06 09:55:26.397306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:04.414 [2024-12-06 09:55:26.397346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:106536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.414 [2024-12-06 09:55:26.397365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:04.414 [2024-12-06 09:55:26.397388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:107168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.414 [2024-12-06 09:55:26.397405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:04.414 [2024-12-06 09:55:26.397427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:107184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.414 [2024-12-06 09:55:26.397444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:18:04.414 [2024-12-06 09:55:26.397467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:107200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.414 [2024-12-06 09:55:26.397484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:04.414 [2024-12-06 09:55:26.397506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:107216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.414 [2024-12-06 09:55:26.397523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:04.414 [2024-12-06 09:55:26.397563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:107232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.414 [2024-12-06 09:55:26.397600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:04.414 [2024-12-06 09:55:26.397647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:107248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.414 [2024-12-06 09:55:26.397687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:04.414 [2024-12-06 09:55:26.397712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:107264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.414 [2024-12-06 09:55:26.397731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:04.414 [2024-12-06 09:55:26.397755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:107280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.414 [2024-12-06 09:55:26.397773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:04.414 [2024-12-06 09:55:26.397796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:107296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.414 [2024-12-06 09:55:26.397818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:04.414 [2024-12-06 09:55:26.397841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:107056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.414 [2024-12-06 09:55:26.397858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:04.414 [2024-12-06 09:55:26.397881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:107088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.414 [2024-12-06 09:55:26.397944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:04.414 [2024-12-06 09:55:26.397971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:107120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.414 [2024-12-06 09:55:26.397989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:04.414 [2024-12-06 09:55:26.399406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:106560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.414 [2024-12-06 09:55:26.399449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:04.414 [2024-12-06 09:55:26.399512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:106592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.414 [2024-12-06 09:55:26.399542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:04.415 [2024-12-06 09:55:26.399566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:107312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.415 [2024-12-06 09:55:26.399583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:04.415 [2024-12-06 09:55:26.399606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:107328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.415 [2024-12-06 09:55:26.399628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:04.415 [2024-12-06 09:55:26.399665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:107344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.415 [2024-12-06 09:55:26.399686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:04.415 [2024-12-06 09:55:26.399710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:107360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.415 [2024-12-06 09:55:26.399728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:04.415 7913.97 IOPS, 30.91 MiB/s [2024-12-06T09:55:29.687Z] 7924.38 IOPS, 30.95 MiB/s [2024-12-06T09:55:29.687Z] 7929.86 IOPS, 30.98 MiB/s [2024-12-06T09:55:29.687Z] Received shutdown signal, test time was about 35.564245 seconds 00:18:04.415 00:18:04.415 Latency(us) 00:18:04.415 [2024-12-06T09:55:29.687Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:04.415 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:04.415 Verification LBA range: start 0x0 length 0x4000 00:18:04.415 Nvme0n1 : 35.56 7932.33 30.99 0.00 0.00 16105.18 398.43 4026531.84 00:18:04.415 [2024-12-06T09:55:29.687Z] =================================================================================================================== 00:18:04.415 [2024-12-06T09:55:29.687Z] Total : 7932.33 30.99 0.00 0.00 16105.18 398.43 4026531.84 00:18:04.415 09:55:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:04.674 09:55:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:18:04.674 09:55:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:04.674 09:55:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:18:04.674 09:55:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:04.674 09:55:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:18:04.674 09:55:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:04.674 09:55:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:18:04.674 09:55:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:04.674 09:55:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:04.674 rmmod nvme_tcp 00:18:04.674 rmmod nvme_fabrics 00:18:04.674 rmmod nvme_keyring 00:18:04.674 09:55:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:04.674 09:55:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:18:04.674 09:55:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:18:04.674 09:55:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 76298 ']' 00:18:04.674 09:55:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 76298 00:18:04.674 09:55:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 76298 ']' 00:18:04.674 09:55:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 76298 00:18:04.674 09:55:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:18:04.674 09:55:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:04.674 09:55:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76298 00:18:04.674 killing process with pid 76298 00:18:04.674 09:55:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:04.674 09:55:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:04.674 09:55:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76298' 00:18:04.674 09:55:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 76298 00:18:04.674 09:55:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 76298 00:18:04.934 09:55:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:04.934 09:55:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:04.934 09:55:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:04.934 09:55:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:18:04.934 09:55:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:18:04.934 09:55:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:18:04.934 09:55:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
00:18:04.934 09:55:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:04.934 09:55:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:04.934 09:55:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:05.193 09:55:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:05.193 09:55:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:05.193 09:55:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:05.193 09:55:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:05.193 09:55:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:05.193 09:55:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:05.193 09:55:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:05.193 09:55:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:05.193 09:55:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:05.193 09:55:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:05.193 09:55:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:05.193 09:55:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:05.193 09:55:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:05.193 09:55:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:05.193 09:55:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:05.193 09:55:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:05.194 09:55:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@300 -- # return 0 00:18:05.194 00:18:05.194 real 0m41.712s 00:18:05.194 user 2m14.928s 00:18:05.194 sys 0m12.643s 00:18:05.194 09:55:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:05.194 09:55:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:18:05.194 ************************************ 00:18:05.194 END TEST nvmf_host_multipath_status 00:18:05.194 ************************************ 00:18:05.453 09:55:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:18:05.453 09:55:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:05.453 09:55:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:05.453 09:55:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:05.453 
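Before the next test starts, note that the teardown logged above is the usual nvmftestfini sequence for these host tests: the test subsystem is dropped over RPC, the nvme-tcp/nvme-fabrics/nvme-keyring modules are unloaded, the SPDK-tagged iptables rules are filtered back out, and the veth/bridge/netns topology is deleted. A rough standalone sketch of that cleanup follows; the interface, bridge and namespace names are the ones printed in the log, the SPDK repo root is assumed to be the working directory, and this is an illustrative approximation rather than the nvmftestfini implementation itself.

#!/usr/bin/env bash
# Rough sketch of the teardown seen above; nvmf_* names and the cnode1 NQN come from the log.
set -x
./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # remove the test subsystem first
modprobe -v -r nvme-tcp nvme-fabrics nvme-keyring                   # unload host-side transport modules
# keep only the iptables rules that were not tagged by the test harness
iptables-save | grep -v SPDK_NVMF | iptables-restore
# detach the veth endpoints from the bridge and bring them down
for ifc in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$ifc" nomaster
    ip link set "$ifc" down
done
ip link delete nvmf_br type bridge
ip link delete nvmf_init_if
ip link delete nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
ip netns delete nvmf_tgt_ns_spdk                                    # remove_spdk_ns equivalent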
************************************ 00:18:05.453 START TEST nvmf_discovery_remove_ifc 00:18:05.453 ************************************ 00:18:05.453 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:18:05.453 * Looking for test storage... 00:18:05.453 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:05.453 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:05.453 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:05.453 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lcov --version 00:18:05.453 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:05.453 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:05.453 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:05.453 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:05.453 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:18:05.453 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:18:05.453 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:18:05.453 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:18:05.453 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:18:05.453 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:18:05.453 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:18:05.453 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:05.454 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:18:05.454 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:18:05.454 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:05.454 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:05.454 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:18:05.454 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:18:05.454 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:05.454 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:18:05.454 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:18:05.454 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:18:05.454 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:18:05.454 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:05.454 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:18:05.454 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:18:05.454 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:05.454 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:05.454 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:18:05.454 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:05.454 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:05.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:05.454 --rc genhtml_branch_coverage=1 00:18:05.454 --rc genhtml_function_coverage=1 00:18:05.454 --rc genhtml_legend=1 00:18:05.454 --rc geninfo_all_blocks=1 00:18:05.454 --rc geninfo_unexecuted_blocks=1 00:18:05.454 00:18:05.454 ' 00:18:05.454 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:05.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:05.454 --rc genhtml_branch_coverage=1 00:18:05.454 --rc genhtml_function_coverage=1 00:18:05.454 --rc genhtml_legend=1 00:18:05.454 --rc geninfo_all_blocks=1 00:18:05.454 --rc geninfo_unexecuted_blocks=1 00:18:05.454 00:18:05.454 ' 00:18:05.454 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:05.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:05.454 --rc genhtml_branch_coverage=1 00:18:05.454 --rc genhtml_function_coverage=1 00:18:05.454 --rc genhtml_legend=1 00:18:05.454 --rc geninfo_all_blocks=1 00:18:05.454 --rc geninfo_unexecuted_blocks=1 00:18:05.454 00:18:05.454 ' 00:18:05.454 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:05.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:05.454 --rc genhtml_branch_coverage=1 00:18:05.454 --rc genhtml_function_coverage=1 00:18:05.454 --rc genhtml_legend=1 00:18:05.454 --rc geninfo_all_blocks=1 00:18:05.454 --rc geninfo_unexecuted_blocks=1 00:18:05.454 00:18:05.454 ' 00:18:05.454 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:05.454 09:55:30 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:18:05.454 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:05.454 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:05.454 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:05.454 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:05.454 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:05.454 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:05.454 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:05.454 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:05.454 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:05.454 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:05.454 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:18:05.454 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:18:05.454 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:05.454 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:05.454 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:05.454 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:05.454 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:05.454 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:18:05.454 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:05.454 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:05.454 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:05.454 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:05.454 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:05.454 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:05.454 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:18:05.454 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:05.454 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:18:05.454 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:05.454 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:05.454 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:05.454 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:05.454 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:05.454 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:05.454 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:05.454 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:05.454 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:05.454 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:05.713 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:18:05.713 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 
-- # discovery_port=8009 00:18:05.713 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:18:05.713 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:18:05.713 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:18:05.713 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:18:05.713 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:18:05.713 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:05.713 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:05.713 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:05.713 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:05.713 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:05.713 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:05.713 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:05.713 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:05.713 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:05.713 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:05.713 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:05.713 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:05.713 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:05.713 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:05.713 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:05.713 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:05.713 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:05.713 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:05.713 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:05.713 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:05.713 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:05.713 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:05.713 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:05.713 09:55:30 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:05.714 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:05.714 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:05.714 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:05.714 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:05.714 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:05.714 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:05.714 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:05.714 Cannot find device "nvmf_init_br" 00:18:05.714 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:18:05.714 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:05.714 Cannot find device "nvmf_init_br2" 00:18:05.714 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:18:05.714 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:05.714 Cannot find device "nvmf_tgt_br" 00:18:05.714 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # true 00:18:05.714 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:05.714 Cannot find device "nvmf_tgt_br2" 00:18:05.714 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # true 00:18:05.714 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:05.714 Cannot find device "nvmf_init_br" 00:18:05.714 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # true 00:18:05.714 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:05.714 Cannot find device "nvmf_init_br2" 00:18:05.714 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # true 00:18:05.714 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:05.714 Cannot find device "nvmf_tgt_br" 00:18:05.714 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # true 00:18:05.714 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:05.714 Cannot find device "nvmf_tgt_br2" 00:18:05.714 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # true 00:18:05.714 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:05.714 Cannot find device "nvmf_br" 00:18:05.714 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # true 00:18:05.714 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:05.714 Cannot find device "nvmf_init_if" 00:18:05.714 09:55:30 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # true 00:18:05.714 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:05.714 Cannot find device "nvmf_init_if2" 00:18:05.714 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # true 00:18:05.714 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:05.714 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:05.714 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # true 00:18:05.714 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:05.714 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:05.714 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # true 00:18:05.714 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:05.714 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:05.714 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:05.714 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:05.714 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:05.714 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:05.714 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:05.714 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:05.714 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:05.714 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:05.714 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:05.714 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:05.714 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:05.714 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:05.714 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:05.714 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:05.714 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:05.972 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:05.972 09:55:30 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:05.972 09:55:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:05.972 09:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:05.972 09:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:05.972 09:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:05.972 09:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:05.972 09:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:05.972 09:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:05.972 09:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:05.973 09:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:05.973 09:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:05.973 09:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:05.973 09:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:05.973 09:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:05.973 09:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:05.973 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:05.973 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:18:05.973 00:18:05.973 --- 10.0.0.3 ping statistics --- 00:18:05.973 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:05.973 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:18:05.973 09:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:05.973 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:05.973 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.037 ms 00:18:05.973 00:18:05.973 --- 10.0.0.4 ping statistics --- 00:18:05.973 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:05.973 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:18:05.973 09:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:05.973 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:05.973 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:18:05.973 00:18:05.973 --- 10.0.0.1 ping statistics --- 00:18:05.973 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:05.973 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:18:05.973 09:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:05.973 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:05.973 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms 00:18:05.973 00:18:05.973 --- 10.0.0.2 ping statistics --- 00:18:05.973 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:05.973 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:18:05.973 09:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:05.973 09:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@461 -- # return 0 00:18:05.973 09:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:05.973 09:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:05.973 09:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:05.973 09:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:05.973 09:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:05.973 09:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:05.973 09:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:05.973 09:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:18:05.973 09:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:05.973 09:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:05.973 09:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:05.973 09:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=77211 00:18:05.973 09:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 77211 00:18:05.973 09:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 77211 ']' 00:18:05.973 09:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:05.973 09:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:05.973 09:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:05.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:05.973 09:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:18:05.973 09:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:05.973 09:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:05.973 [2024-12-06 09:55:31.194969] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:18:05.973 [2024-12-06 09:55:31.195058] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:06.232 [2024-12-06 09:55:31.346702] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:06.232 [2024-12-06 09:55:31.402784] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:06.232 [2024-12-06 09:55:31.402852] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:06.232 [2024-12-06 09:55:31.402874] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:06.232 [2024-12-06 09:55:31.402885] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:06.232 [2024-12-06 09:55:31.402895] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:06.232 [2024-12-06 09:55:31.403372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:06.232 [2024-12-06 09:55:31.459784] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:06.490 09:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:06.490 09:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:18:06.490 09:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:06.490 09:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:06.490 09:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:06.490 09:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:06.490 09:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:18:06.490 09:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.490 09:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:06.490 [2024-12-06 09:55:31.587080] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:06.490 [2024-12-06 09:55:31.595213] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:18:06.490 null0 00:18:06.490 [2024-12-06 09:55:31.627067] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:06.490 09:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.490 09:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=77235 00:18:06.490 09:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 
0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:18:06.490 09:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 77235 /tmp/host.sock 00:18:06.491 09:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 77235 ']' 00:18:06.491 09:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:18:06.491 09:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:06.491 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:18:06.491 09:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:18:06.491 09:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:06.491 09:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:06.491 [2024-12-06 09:55:31.711808] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:18:06.491 [2024-12-06 09:55:31.711914] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77235 ] 00:18:06.749 [2024-12-06 09:55:31.866224] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:06.749 [2024-12-06 09:55:31.923147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:06.749 09:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:06.749 09:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:18:06.749 09:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:06.749 09:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:18:06.749 09:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.749 09:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:06.749 09:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.749 09:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:18:06.749 09:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.749 09:55:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:07.009 [2024-12-06 09:55:32.039171] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:07.009 09:55:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.009 09:55:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 
--ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:18:07.009 09:55:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.009 09:55:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:08.026 [2024-12-06 09:55:33.100240] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:18:08.026 [2024-12-06 09:55:33.100276] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:18:08.026 [2024-12-06 09:55:33.100303] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:18:08.026 [2024-12-06 09:55:33.106290] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:18:08.026 [2024-12-06 09:55:33.160732] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:18:08.026 [2024-12-06 09:55:33.161808] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1d42f00:1 started. 00:18:08.026 [2024-12-06 09:55:33.163773] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:18:08.026 [2024-12-06 09:55:33.163854] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:18:08.026 [2024-12-06 09:55:33.163884] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:18:08.026 [2024-12-06 09:55:33.163901] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:18:08.026 [2024-12-06 09:55:33.163927] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:18:08.026 09:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.026 09:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:18:08.026 09:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:08.026 [2024-12-06 09:55:33.168937] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1d42f00 was disconnected and freed. delete nvme_qpair. 
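For reference, the host-side setup traced above reduces to roughly the following command sequence (a condensed sketch assembled from the traced lines; the backgrounding of nvmf_tgt and the PID capture are assumed, everything else is quoted from the trace):

  # second SPDK app acting as the NVMe-oF host, controlled over /tmp/host.sock
  /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &
  hostpid=$!
  waitforlisten "$hostpid" /tmp/host.sock

  # configure bdev_nvme, start the framework, then kick off discovery against 10.0.0.3:8009
  rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1
  rpc_cmd -s /tmp/host.sock framework_start_init
  rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 \
      -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
      --fast-io-fail-timeout-sec 1 --wait-for-attach

With --wait-for-attach the discovery RPC only returns once the initial attach has completed, which matches the trace: "attach nvme0 done" is logged before the first bdev listing below already reports nvme0n1.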
00:18:08.026 09:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:08.026 09:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.026 09:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:08.026 09:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:08.026 09:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:08.026 09:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:08.026 09:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.026 09:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:18:08.026 09:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if 00:18:08.026 09:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:18:08.026 09:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:18:08.026 09:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:08.026 09:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:08.026 09:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.026 09:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:08.026 09:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:08.026 09:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:08.026 09:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:08.026 09:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.026 09:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:08.026 09:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:09.405 09:55:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:09.405 09:55:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:09.405 09:55:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:09.405 09:55:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.405 09:55:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:09.405 09:55:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:09.405 09:55:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:09.405 09:55:34 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.405 09:55:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:09.405 09:55:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:10.343 09:55:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:10.343 09:55:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:10.343 09:55:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:10.343 09:55:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.343 09:55:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:10.343 09:55:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:10.343 09:55:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:10.343 09:55:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.343 09:55:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:10.343 09:55:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:11.278 09:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:11.278 09:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:11.278 09:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.278 09:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:11.278 09:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:11.278 09:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:11.278 09:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:11.278 09:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.278 09:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:11.278 09:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:12.216 09:55:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:12.216 09:55:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:12.216 09:55:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:12.216 09:55:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.216 09:55:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:12.216 09:55:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:12.216 09:55:37 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:12.474 09:55:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.474 09:55:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:12.474 09:55:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:13.409 09:55:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:13.409 09:55:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:13.409 09:55:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:13.409 09:55:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.409 09:55:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:13.409 09:55:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:13.409 09:55:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:13.409 09:55:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.409 09:55:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:13.409 09:55:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:13.409 [2024-12-06 09:55:38.591224] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:18:13.409 [2024-12-06 09:55:38.591302] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:13.409 [2024-12-06 09:55:38.591320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.409 [2024-12-06 09:55:38.591333] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:13.409 [2024-12-06 09:55:38.591342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.409 [2024-12-06 09:55:38.591352] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:13.409 [2024-12-06 09:55:38.591361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.409 [2024-12-06 09:55:38.591371] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:13.409 [2024-12-06 09:55:38.591379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.409 [2024-12-06 09:55:38.591389] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:18:13.409 [2024-12-06 09:55:38.591398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.409 [2024-12-06 09:55:38.591408] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1efc0 is same with the state(6) to be set 00:18:13.409 [2024-12-06 09:55:38.601218] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1efc0 (9): Bad file descriptor 00:18:13.409 [2024-12-06 09:55:38.611236] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:18:13.409 [2024-12-06 09:55:38.611263] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:18:13.409 [2024-12-06 09:55:38.611270] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:18:13.409 [2024-12-06 09:55:38.611276] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:18:13.409 [2024-12-06 09:55:38.611325] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:18:14.360 09:55:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:14.360 09:55:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:14.360 09:55:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.360 09:55:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:14.360 09:55:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:14.360 09:55:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:14.360 09:55:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:14.620 [2024-12-06 09:55:39.633730] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:18:14.620 [2024-12-06 09:55:39.633864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1efc0 with addr=10.0.0.3, port=4420 00:18:14.620 [2024-12-06 09:55:39.633910] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1efc0 is same with the state(6) to be set 00:18:14.620 [2024-12-06 09:55:39.633977] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1efc0 (9): Bad file descriptor 00:18:14.620 [2024-12-06 09:55:39.634939] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:18:14.620 [2024-12-06 09:55:39.635038] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:18:14.620 [2024-12-06 09:55:39.635063] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:18:14.620 [2024-12-06 09:55:39.635093] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:18:14.620 [2024-12-06 09:55:39.635114] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:18:14.621 [2024-12-06 09:55:39.635140] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
00:18:14.621 [2024-12-06 09:55:39.635161] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:18:14.621 [2024-12-06 09:55:39.635222] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:18:14.621 [2024-12-06 09:55:39.635236] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:18:14.621 09:55:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.621 09:55:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:14.621 09:55:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:15.558 [2024-12-06 09:55:40.635317] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:18:15.558 [2024-12-06 09:55:40.635367] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:18:15.558 [2024-12-06 09:55:40.635389] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:18:15.558 [2024-12-06 09:55:40.635414] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:18:15.558 [2024-12-06 09:55:40.635424] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:18:15.558 [2024-12-06 09:55:40.635433] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:18:15.558 [2024-12-06 09:55:40.635440] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:18:15.558 [2024-12-06 09:55:40.635445] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:18:15.558 [2024-12-06 09:55:40.635476] bdev_nvme.c:7262:remove_discovery_entry: *INFO*: Discovery[10.0.0.3:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 00:18:15.558 [2024-12-06 09:55:40.635528] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:15.558 [2024-12-06 09:55:40.635543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.558 [2024-12-06 09:55:40.635556] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:15.558 [2024-12-06 09:55:40.635581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.558 [2024-12-06 09:55:40.635621] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:15.558 [2024-12-06 09:55:40.635630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.558 [2024-12-06 09:55:40.635651] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:15.558 [2024-12-06 09:55:40.635661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.558 [2024-12-06 09:55:40.635671] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:18:15.558 [2024-12-06 09:55:40.635680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.558 [2024-12-06 09:55:40.635689] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
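After the initial attach, the test deletes 10.0.0.3/24 from nvmf_tgt_if inside the nvmf_tgt_ns_spdk namespace and downs the link (the @75/@76 lines further up), so the host's connections start failing with errno 110 and the reconnect attempts above cannot succeed; once the 2-second --ctrlr-loss-timeout-sec expires the controller is failed, its bdev goes away, and the discovery entry is removed. The repeated bdev_get_bdevs / sleep 1 cycles are the wait_for_bdev helper polling for the bdev list to drain. A sketch of the helpers' shape, reconstructed from the traced @29/@33/@34 pipeline rather than copied from the script:

  get_bdev_list() {
      # all bdev names known to the host app, sorted onto one line
      rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }

  wait_for_bdev() {
      # poll once per second until the list equals the expected value
      # ("nvme0n1" right after attach, "" once the path has been lost)
      while [[ "$(get_bdev_list)" != "$1" ]]; do
          sleep 1
      done
  }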
00:18:15.558 [2024-12-06 09:55:40.635880] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1caaa20 (9): Bad file descriptor 00:18:15.558 [2024-12-06 09:55:40.636895] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:18:15.558 [2024-12-06 09:55:40.636934] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:18:15.558 09:55:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:15.558 09:55:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:15.558 09:55:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:15.558 09:55:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.558 09:55:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:15.558 09:55:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:15.558 09:55:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:15.558 09:55:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.558 09:55:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:18:15.558 09:55:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:15.558 09:55:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:15.558 09:55:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:18:15.558 09:55:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:15.558 09:55:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:15.558 09:55:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:15.558 09:55:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:15.558 09:55:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.558 09:55:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:15.558 09:55:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:15.558 09:55:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.558 09:55:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:18:15.558 09:55:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:16.935 09:55:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:16.935 09:55:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:16.935 09:55:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc 
-- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:16.935 09:55:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:16.936 09:55:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.936 09:55:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:16.936 09:55:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:16.936 09:55:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.936 09:55:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:18:16.936 09:55:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:17.503 [2024-12-06 09:55:42.644801] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:18:17.503 [2024-12-06 09:55:42.644840] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:18:17.503 [2024-12-06 09:55:42.644859] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:18:17.503 [2024-12-06 09:55:42.650891] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme1 00:18:17.503 [2024-12-06 09:55:42.705240] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4420 00:18:17.503 [2024-12-06 09:55:42.706073] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x1d4b1d0:1 started. 00:18:17.503 [2024-12-06 09:55:42.707482] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:18:17.503 [2024-12-06 09:55:42.707528] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:18:17.503 [2024-12-06 09:55:42.707553] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:18:17.503 [2024-12-06 09:55:42.707580] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme1 done 00:18:17.503 [2024-12-06 09:55:42.707606] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:18:17.503 [2024-12-06 09:55:42.713439] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x1d4b1d0 was disconnected and freed. delete nvme_qpair. 
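The re-attach above is triggered by restoring the target-side interface (the @82/@83/@86 lines earlier in the trace), roughly:

  # bring the target-side address and link back, then wait for rediscovery
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  wait_for_bdev nvme1n1

The discovery poller picks the listener up again without any further host-side RPCs, and because the previous controller was torn down the re-attached namespace surfaces under a new name (nvme1/nvme1n1 rather than nvme0/nvme0n1), which is what the final check below expects.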
00:18:17.762 09:55:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:17.762 09:55:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:17.762 09:55:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:17.762 09:55:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:17.762 09:55:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.762 09:55:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:17.762 09:55:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:17.762 09:55:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.762 09:55:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:18:17.762 09:55:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:18:17.762 09:55:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 77235 00:18:17.762 09:55:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 77235 ']' 00:18:17.762 09:55:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 77235 00:18:17.762 09:55:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:18:17.763 09:55:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:17.763 09:55:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77235 00:18:17.763 killing process with pid 77235 00:18:17.763 09:55:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:17.763 09:55:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:17.763 09:55:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77235' 00:18:17.763 09:55:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 77235 00:18:17.763 09:55:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 77235 00:18:18.021 09:55:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:18:18.021 09:55:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:18.021 09:55:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:18:18.021 09:55:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:18.021 09:55:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:18:18.022 09:55:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:18.022 09:55:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:18.022 rmmod nvme_tcp 00:18:18.022 rmmod nvme_fabrics 00:18:18.022 rmmod nvme_keyring 00:18:18.022 09:55:43 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:18.022 09:55:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:18:18.022 09:55:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:18:18.022 09:55:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 77211 ']' 00:18:18.022 09:55:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 77211 00:18:18.022 09:55:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 77211 ']' 00:18:18.022 09:55:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 77211 00:18:18.022 09:55:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:18:18.022 09:55:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:18.022 09:55:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77211 00:18:18.281 killing process with pid 77211 00:18:18.281 09:55:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:18.281 09:55:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:18.281 09:55:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77211' 00:18:18.281 09:55:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 77211 00:18:18.281 09:55:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 77211 00:18:18.281 09:55:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:18.281 09:55:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:18.281 09:55:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:18.281 09:55:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:18:18.281 09:55:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:18:18.281 09:55:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:18.281 09:55:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:18:18.281 09:55:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:18.281 09:55:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:18.281 09:55:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:18.281 09:55:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:18.281 09:55:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:18.281 09:55:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:18.541 09:55:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:18.541 09:55:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:18.541 09:55:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:18.541 09:55:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:18.541 09:55:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:18.541 09:55:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:18.541 09:55:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:18.541 09:55:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:18.541 09:55:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:18.541 09:55:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:18.541 09:55:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:18.541 09:55:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:18.541 09:55:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:18.541 09:55:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@300 -- # return 0 00:18:18.541 00:18:18.541 real 0m13.218s 00:18:18.541 user 0m22.333s 00:18:18.541 sys 0m2.572s 00:18:18.541 09:55:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:18.541 09:55:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:18.541 ************************************ 00:18:18.541 END TEST nvmf_discovery_remove_ifc 00:18:18.541 ************************************ 00:18:18.541 09:55:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:18:18.541 09:55:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:18.541 09:55:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:18.541 09:55:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:18.541 ************************************ 00:18:18.541 START TEST nvmf_identify_kernel_target 00:18:18.541 ************************************ 00:18:18.541 09:55:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:18:18.801 * Looking for test storage... 
00:18:18.801 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:18.801 09:55:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:18.801 09:55:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:18.801 09:55:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lcov --version 00:18:18.801 09:55:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:18.801 09:55:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:18.801 09:55:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:18.801 09:55:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:18.801 09:55:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:18:18.801 09:55:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:18:18.801 09:55:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:18:18.801 09:55:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:18:18.801 09:55:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:18:18.801 09:55:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:18:18.801 09:55:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:18:18.801 09:55:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:18.801 09:55:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:18:18.801 09:55:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:18:18.801 09:55:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:18.801 09:55:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:18.801 09:55:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:18:18.801 09:55:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:18:18.801 09:55:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:18.802 09:55:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:18:18.802 09:55:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:18:18.802 09:55:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:18:18.802 09:55:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:18:18.802 09:55:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:18.802 09:55:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:18:18.802 09:55:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:18:18.802 09:55:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:18.802 09:55:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:18.802 09:55:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:18:18.802 09:55:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:18.802 09:55:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:18.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:18.802 --rc genhtml_branch_coverage=1 00:18:18.802 --rc genhtml_function_coverage=1 00:18:18.802 --rc genhtml_legend=1 00:18:18.802 --rc geninfo_all_blocks=1 00:18:18.802 --rc geninfo_unexecuted_blocks=1 00:18:18.802 00:18:18.802 ' 00:18:18.802 09:55:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:18.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:18.802 --rc genhtml_branch_coverage=1 00:18:18.802 --rc genhtml_function_coverage=1 00:18:18.802 --rc genhtml_legend=1 00:18:18.802 --rc geninfo_all_blocks=1 00:18:18.802 --rc geninfo_unexecuted_blocks=1 00:18:18.802 00:18:18.802 ' 00:18:18.802 09:55:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:18.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:18.802 --rc genhtml_branch_coverage=1 00:18:18.802 --rc genhtml_function_coverage=1 00:18:18.802 --rc genhtml_legend=1 00:18:18.802 --rc geninfo_all_blocks=1 00:18:18.802 --rc geninfo_unexecuted_blocks=1 00:18:18.802 00:18:18.802 ' 00:18:18.802 09:55:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:18.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:18.802 --rc genhtml_branch_coverage=1 00:18:18.802 --rc genhtml_function_coverage=1 00:18:18.802 --rc genhtml_legend=1 00:18:18.802 --rc geninfo_all_blocks=1 00:18:18.802 --rc geninfo_unexecuted_blocks=1 00:18:18.802 00:18:18.802 ' 00:18:18.802 09:55:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
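The scripts/common.sh trace above is the coverage-tooling version probe: the installed lcov version (1.15) is split on '.', '-' and ':' and compared field by field against 2; since 1 < 2, the pre-2.0 option spelling (--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1) is selected. A minimal sketch of the comparison being made (assumed shape; the real cmp_versions handles more operators and fields):

  # compare 1.15 against 2 field by field, as in the traced cmp_versions run
  IFS=.-: read -ra ver1 <<< "1.15"
  IFS=.-: read -ra ver2 <<< "2"
  (( ${ver1[0]} < ${ver2[0]} ))   # 1 < 2, so "lt 1.15 2" succeeds and the
                                  # older lcov --rc options are exported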
00:18:18.802 09:55:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:18:18.802 09:55:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:18.802 09:55:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:18.802 09:55:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:18.802 09:55:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:18.802 09:55:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:18.802 09:55:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:18.802 09:55:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:18.802 09:55:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:18.802 09:55:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:18.802 09:55:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:18.802 09:55:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:18:18.802 09:55:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:18:18.802 09:55:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:18.802 09:55:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:18.802 09:55:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:18.802 09:55:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:18.802 09:55:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:18.802 09:55:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:18:18.802 09:55:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:18.802 09:55:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:18.802 09:55:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:18.802 09:55:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.802 09:55:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.802 09:55:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.802 09:55:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:18:18.802 09:55:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.802 09:55:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:18:18.802 09:55:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:18.802 09:55:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:18.802 09:55:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:18.802 09:55:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:18.802 09:55:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:18.802 09:55:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:18.802 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:18.802 09:55:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:18.802 09:55:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:18.802 09:55:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:18.802 09:55:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:18:18.802 09:55:43 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:18.802 09:55:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:18.802 09:55:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:18.802 09:55:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:18.802 09:55:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:18.802 09:55:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:18.802 09:55:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:18.802 09:55:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:18.802 09:55:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:18.802 09:55:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:18.802 09:55:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:18.802 09:55:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:18.802 09:55:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:18.802 09:55:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:18.802 09:55:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:18.802 09:55:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:18.802 09:55:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:18.802 09:55:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:18.802 09:55:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:18.802 09:55:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:18.803 09:55:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:18.803 09:55:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:18.803 09:55:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:18.803 09:55:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:18.803 09:55:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:18.803 09:55:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:18.803 09:55:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:18.803 09:55:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:18.803 09:55:43 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:18.803 09:55:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:18.803 09:55:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:18.803 Cannot find device "nvmf_init_br" 00:18:18.803 09:55:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:18:18.803 09:55:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:18.803 Cannot find device "nvmf_init_br2" 00:18:18.803 09:55:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:18:18.803 09:55:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:18.803 Cannot find device "nvmf_tgt_br" 00:18:18.803 09:55:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # true 00:18:18.803 09:55:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:18.803 Cannot find device "nvmf_tgt_br2" 00:18:18.803 09:55:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # true 00:18:18.803 09:55:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:18.803 Cannot find device "nvmf_init_br" 00:18:18.803 09:55:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # true 00:18:18.803 09:55:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:18.803 Cannot find device "nvmf_init_br2" 00:18:18.803 09:55:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # true 00:18:18.803 09:55:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:19.062 Cannot find device "nvmf_tgt_br" 00:18:19.062 09:55:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # true 00:18:19.062 09:55:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:19.062 Cannot find device "nvmf_tgt_br2" 00:18:19.062 09:55:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # true 00:18:19.062 09:55:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:19.062 Cannot find device "nvmf_br" 00:18:19.062 09:55:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # true 00:18:19.062 09:55:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:19.062 Cannot find device "nvmf_init_if" 00:18:19.062 09:55:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # true 00:18:19.062 09:55:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:19.062 Cannot find device "nvmf_init_if2" 00:18:19.062 09:55:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # true 00:18:19.062 09:55:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:19.062 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:19.062 09:55:44 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # true 00:18:19.062 09:55:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:19.062 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:19.062 09:55:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # true 00:18:19.062 09:55:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:19.062 09:55:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:19.062 09:55:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:19.062 09:55:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:19.062 09:55:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:19.062 09:55:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:19.062 09:55:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:19.062 09:55:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:19.063 09:55:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:19.063 09:55:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:19.063 09:55:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:19.063 09:55:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:19.063 09:55:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:19.063 09:55:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:19.063 09:55:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:19.063 09:55:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:19.063 09:55:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:19.063 09:55:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:19.063 09:55:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:19.063 09:55:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:19.322 09:55:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:19.322 09:55:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:19.322 09:55:44 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:19.322 09:55:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:19.322 09:55:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:19.322 09:55:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:19.322 09:55:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:19.322 09:55:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:19.322 09:55:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:19.322 09:55:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:19.322 09:55:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:19.322 09:55:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:19.322 09:55:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:19.322 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:19.322 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:18:19.322 00:18:19.322 --- 10.0.0.3 ping statistics --- 00:18:19.322 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:19.322 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:18:19.323 09:55:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:19.323 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:19.323 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.106 ms 00:18:19.323 00:18:19.323 --- 10.0.0.4 ping statistics --- 00:18:19.323 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:19.323 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:18:19.323 09:55:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:19.323 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:19.323 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:18:19.323 00:18:19.323 --- 10.0.0.1 ping statistics --- 00:18:19.323 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:19.323 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:18:19.323 09:55:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:19.323 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
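The nvmf_veth_init trace above builds the test network in software: veth pairs for the initiator and target sides, the target-side ends moved into the nvmf_tgt_ns_spdk namespace, the bridge-side ends enslaved to nvmf_br, iptables ACCEPT rules for NVMe/TCP port 4420, and ping checks in both directions. A minimal stand-alone sketch of the same topology, reduced to one initiator and one target interface and reusing the names, 10.0.0.0/24 addresses and port seen in the trace (run as root):

# namespace for the target plus one veth pair per side
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

# initiator on 10.0.0.1, target on 10.0.0.3
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

# bridge the two bridge-side peers together and bring everything up
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# accept NVMe/TCP on 4420 and allow forwarding across the bridge
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# connectivity check in both directions
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

On teardown the harness reverses this by filtering its SPDK_NVMF-tagged rules out of iptables-save output, restoring the result with iptables-restore, and deleting the bridge, veth devices and namespace, as visible further down in the trace.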
00:18:19.323 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms 00:18:19.323 00:18:19.323 --- 10.0.0.2 ping statistics --- 00:18:19.323 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:19.323 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:18:19.323 09:55:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:19.323 09:55:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@461 -- # return 0 00:18:19.323 09:55:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:19.323 09:55:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:19.323 09:55:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:19.323 09:55:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:19.323 09:55:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:19.323 09:55:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:19.323 09:55:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:19.323 09:55:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:18:19.323 09:55:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:18:19.323 09:55:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:18:19.323 09:55:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:19.323 09:55:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:19.323 09:55:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:19.323 09:55:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:19.323 09:55:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:19.323 09:55:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:19.323 09:55:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:19.323 09:55:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:19.323 09:55:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:19.323 09:55:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:18:19.323 09:55:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:18:19.323 09:55:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:18:19.323 09:55:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:18:19.323 09:55:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:18:19.323 09:55:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:18:19.323 09:55:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:18:19.323 09:55:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:18:19.323 09:55:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:18:19.323 09:55:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:18:19.323 09:55:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:18:19.323 09:55:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:18:19.582 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:19.842 Waiting for block devices as requested 00:18:19.842 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:18:19.842 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:18:19.842 09:55:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:18:19.842 09:55:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:18:19.842 09:55:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:18:19.842 09:55:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:18:19.842 09:55:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:18:19.842 09:55:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:19.842 09:55:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:18:19.842 09:55:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:18:19.842 09:55:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:18:20.101 No valid GPT data, bailing 00:18:20.101 09:55:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:18:20.101 09:55:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:18:20.101 09:55:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:18:20.101 09:55:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:18:20.101 09:55:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:18:20.101 09:55:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:18:20.101 09:55:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:18:20.101 09:55:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:18:20.101 09:55:45 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:18:20.101 09:55:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:20.101 09:55:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:18:20.101 09:55:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:18:20.101 09:55:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:18:20.101 No valid GPT data, bailing 00:18:20.101 09:55:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:18:20.101 09:55:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:18:20.101 09:55:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:18:20.101 09:55:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:18:20.101 09:55:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:18:20.101 09:55:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:18:20.101 09:55:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:18:20.101 09:55:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:18:20.101 09:55:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:18:20.101 09:55:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:20.101 09:55:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:18:20.101 09:55:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:18:20.101 09:55:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:18:20.101 No valid GPT data, bailing 00:18:20.101 09:55:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:18:20.101 09:55:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:18:20.101 09:55:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:18:20.101 09:55:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:18:20.101 09:55:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:18:20.101 09:55:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:18:20.101 09:55:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:18:20.101 09:55:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:18:20.101 09:55:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:18:20.101 09:55:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:20.101 09:55:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:18:20.101 09:55:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:18:20.101 09:55:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:18:20.101 No valid GPT data, bailing 00:18:20.360 09:55:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:18:20.360 09:55:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:18:20.360 09:55:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:18:20.360 09:55:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:18:20.360 09:55:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:18:20.360 09:55:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:18:20.360 09:55:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:18:20.360 09:55:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:18:20.360 09:55:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:18:20.360 09:55:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:18:20.360 09:55:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:18:20.360 09:55:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:18:20.360 09:55:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:18:20.360 09:55:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:18:20.360 09:55:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:18:20.360 09:55:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:18:20.360 09:55:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:18:20.360 09:55:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --hostid=8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -a 10.0.0.1 -t tcp -s 4420 00:18:20.360 00:18:20.360 Discovery Log Number of Records 2, Generation counter 2 00:18:20.360 =====Discovery Log Entry 0====== 00:18:20.360 trtype: tcp 00:18:20.360 adrfam: ipv4 00:18:20.360 subtype: current discovery subsystem 00:18:20.360 treq: not specified, sq flow control disable supported 00:18:20.360 portid: 1 00:18:20.360 trsvcid: 4420 00:18:20.360 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:18:20.360 traddr: 10.0.0.1 00:18:20.360 eflags: none 00:18:20.360 sectype: none 00:18:20.360 =====Discovery Log Entry 1====== 00:18:20.360 trtype: tcp 00:18:20.360 adrfam: ipv4 00:18:20.360 subtype: nvme subsystem 00:18:20.360 treq: not 
specified, sq flow control disable supported 00:18:20.360 portid: 1 00:18:20.360 trsvcid: 4420 00:18:20.360 subnqn: nqn.2016-06.io.spdk:testnqn 00:18:20.360 traddr: 10.0.0.1 00:18:20.360 eflags: none 00:18:20.360 sectype: none 00:18:20.360 09:55:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:18:20.360 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:18:20.360 ===================================================== 00:18:20.360 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:18:20.360 ===================================================== 00:18:20.360 Controller Capabilities/Features 00:18:20.360 ================================ 00:18:20.360 Vendor ID: 0000 00:18:20.360 Subsystem Vendor ID: 0000 00:18:20.360 Serial Number: 3ffd9526057db77d6356 00:18:20.360 Model Number: Linux 00:18:20.360 Firmware Version: 6.8.9-20 00:18:20.360 Recommended Arb Burst: 0 00:18:20.360 IEEE OUI Identifier: 00 00 00 00:18:20.360 Multi-path I/O 00:18:20.360 May have multiple subsystem ports: No 00:18:20.360 May have multiple controllers: No 00:18:20.360 Associated with SR-IOV VF: No 00:18:20.360 Max Data Transfer Size: Unlimited 00:18:20.360 Max Number of Namespaces: 0 00:18:20.360 Max Number of I/O Queues: 1024 00:18:20.360 NVMe Specification Version (VS): 1.3 00:18:20.360 NVMe Specification Version (Identify): 1.3 00:18:20.360 Maximum Queue Entries: 1024 00:18:20.360 Contiguous Queues Required: No 00:18:20.360 Arbitration Mechanisms Supported 00:18:20.360 Weighted Round Robin: Not Supported 00:18:20.360 Vendor Specific: Not Supported 00:18:20.360 Reset Timeout: 7500 ms 00:18:20.360 Doorbell Stride: 4 bytes 00:18:20.360 NVM Subsystem Reset: Not Supported 00:18:20.360 Command Sets Supported 00:18:20.360 NVM Command Set: Supported 00:18:20.360 Boot Partition: Not Supported 00:18:20.360 Memory Page Size Minimum: 4096 bytes 00:18:20.360 Memory Page Size Maximum: 4096 bytes 00:18:20.360 Persistent Memory Region: Not Supported 00:18:20.360 Optional Asynchronous Events Supported 00:18:20.360 Namespace Attribute Notices: Not Supported 00:18:20.360 Firmware Activation Notices: Not Supported 00:18:20.360 ANA Change Notices: Not Supported 00:18:20.360 PLE Aggregate Log Change Notices: Not Supported 00:18:20.361 LBA Status Info Alert Notices: Not Supported 00:18:20.361 EGE Aggregate Log Change Notices: Not Supported 00:18:20.361 Normal NVM Subsystem Shutdown event: Not Supported 00:18:20.361 Zone Descriptor Change Notices: Not Supported 00:18:20.361 Discovery Log Change Notices: Supported 00:18:20.361 Controller Attributes 00:18:20.361 128-bit Host Identifier: Not Supported 00:18:20.361 Non-Operational Permissive Mode: Not Supported 00:18:20.361 NVM Sets: Not Supported 00:18:20.361 Read Recovery Levels: Not Supported 00:18:20.361 Endurance Groups: Not Supported 00:18:20.361 Predictable Latency Mode: Not Supported 00:18:20.361 Traffic Based Keep ALive: Not Supported 00:18:20.361 Namespace Granularity: Not Supported 00:18:20.361 SQ Associations: Not Supported 00:18:20.361 UUID List: Not Supported 00:18:20.361 Multi-Domain Subsystem: Not Supported 00:18:20.361 Fixed Capacity Management: Not Supported 00:18:20.361 Variable Capacity Management: Not Supported 00:18:20.361 Delete Endurance Group: Not Supported 00:18:20.361 Delete NVM Set: Not Supported 00:18:20.361 Extended LBA Formats Supported: Not Supported 00:18:20.361 Flexible Data 
Placement Supported: Not Supported 00:18:20.361 00:18:20.361 Controller Memory Buffer Support 00:18:20.361 ================================ 00:18:20.361 Supported: No 00:18:20.361 00:18:20.361 Persistent Memory Region Support 00:18:20.361 ================================ 00:18:20.361 Supported: No 00:18:20.361 00:18:20.361 Admin Command Set Attributes 00:18:20.361 ============================ 00:18:20.361 Security Send/Receive: Not Supported 00:18:20.361 Format NVM: Not Supported 00:18:20.361 Firmware Activate/Download: Not Supported 00:18:20.361 Namespace Management: Not Supported 00:18:20.361 Device Self-Test: Not Supported 00:18:20.361 Directives: Not Supported 00:18:20.361 NVMe-MI: Not Supported 00:18:20.361 Virtualization Management: Not Supported 00:18:20.361 Doorbell Buffer Config: Not Supported 00:18:20.361 Get LBA Status Capability: Not Supported 00:18:20.361 Command & Feature Lockdown Capability: Not Supported 00:18:20.361 Abort Command Limit: 1 00:18:20.361 Async Event Request Limit: 1 00:18:20.361 Number of Firmware Slots: N/A 00:18:20.361 Firmware Slot 1 Read-Only: N/A 00:18:20.361 Firmware Activation Without Reset: N/A 00:18:20.361 Multiple Update Detection Support: N/A 00:18:20.361 Firmware Update Granularity: No Information Provided 00:18:20.361 Per-Namespace SMART Log: No 00:18:20.361 Asymmetric Namespace Access Log Page: Not Supported 00:18:20.361 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:18:20.361 Command Effects Log Page: Not Supported 00:18:20.361 Get Log Page Extended Data: Supported 00:18:20.361 Telemetry Log Pages: Not Supported 00:18:20.361 Persistent Event Log Pages: Not Supported 00:18:20.361 Supported Log Pages Log Page: May Support 00:18:20.361 Commands Supported & Effects Log Page: Not Supported 00:18:20.361 Feature Identifiers & Effects Log Page:May Support 00:18:20.361 NVMe-MI Commands & Effects Log Page: May Support 00:18:20.361 Data Area 4 for Telemetry Log: Not Supported 00:18:20.361 Error Log Page Entries Supported: 1 00:18:20.361 Keep Alive: Not Supported 00:18:20.361 00:18:20.361 NVM Command Set Attributes 00:18:20.361 ========================== 00:18:20.361 Submission Queue Entry Size 00:18:20.361 Max: 1 00:18:20.361 Min: 1 00:18:20.361 Completion Queue Entry Size 00:18:20.361 Max: 1 00:18:20.361 Min: 1 00:18:20.361 Number of Namespaces: 0 00:18:20.361 Compare Command: Not Supported 00:18:20.361 Write Uncorrectable Command: Not Supported 00:18:20.361 Dataset Management Command: Not Supported 00:18:20.361 Write Zeroes Command: Not Supported 00:18:20.361 Set Features Save Field: Not Supported 00:18:20.361 Reservations: Not Supported 00:18:20.361 Timestamp: Not Supported 00:18:20.361 Copy: Not Supported 00:18:20.361 Volatile Write Cache: Not Present 00:18:20.361 Atomic Write Unit (Normal): 1 00:18:20.361 Atomic Write Unit (PFail): 1 00:18:20.361 Atomic Compare & Write Unit: 1 00:18:20.361 Fused Compare & Write: Not Supported 00:18:20.361 Scatter-Gather List 00:18:20.361 SGL Command Set: Supported 00:18:20.361 SGL Keyed: Not Supported 00:18:20.361 SGL Bit Bucket Descriptor: Not Supported 00:18:20.361 SGL Metadata Pointer: Not Supported 00:18:20.361 Oversized SGL: Not Supported 00:18:20.361 SGL Metadata Address: Not Supported 00:18:20.361 SGL Offset: Supported 00:18:20.361 Transport SGL Data Block: Not Supported 00:18:20.361 Replay Protected Memory Block: Not Supported 00:18:20.361 00:18:20.361 Firmware Slot Information 00:18:20.361 ========================= 00:18:20.361 Active slot: 0 00:18:20.361 00:18:20.361 00:18:20.361 Error Log 
00:18:20.361 ========= 00:18:20.361 00:18:20.361 Active Namespaces 00:18:20.361 ================= 00:18:20.361 Discovery Log Page 00:18:20.361 ================== 00:18:20.361 Generation Counter: 2 00:18:20.361 Number of Records: 2 00:18:20.361 Record Format: 0 00:18:20.361 00:18:20.361 Discovery Log Entry 0 00:18:20.361 ---------------------- 00:18:20.361 Transport Type: 3 (TCP) 00:18:20.361 Address Family: 1 (IPv4) 00:18:20.361 Subsystem Type: 3 (Current Discovery Subsystem) 00:18:20.361 Entry Flags: 00:18:20.361 Duplicate Returned Information: 0 00:18:20.361 Explicit Persistent Connection Support for Discovery: 0 00:18:20.361 Transport Requirements: 00:18:20.361 Secure Channel: Not Specified 00:18:20.361 Port ID: 1 (0x0001) 00:18:20.361 Controller ID: 65535 (0xffff) 00:18:20.361 Admin Max SQ Size: 32 00:18:20.361 Transport Service Identifier: 4420 00:18:20.361 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:18:20.361 Transport Address: 10.0.0.1 00:18:20.361 Discovery Log Entry 1 00:18:20.361 ---------------------- 00:18:20.361 Transport Type: 3 (TCP) 00:18:20.361 Address Family: 1 (IPv4) 00:18:20.361 Subsystem Type: 2 (NVM Subsystem) 00:18:20.361 Entry Flags: 00:18:20.361 Duplicate Returned Information: 0 00:18:20.361 Explicit Persistent Connection Support for Discovery: 0 00:18:20.361 Transport Requirements: 00:18:20.361 Secure Channel: Not Specified 00:18:20.361 Port ID: 1 (0x0001) 00:18:20.361 Controller ID: 65535 (0xffff) 00:18:20.361 Admin Max SQ Size: 32 00:18:20.361 Transport Service Identifier: 4420 00:18:20.361 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:18:20.361 Transport Address: 10.0.0.1 00:18:20.361 09:55:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:18:20.620 get_feature(0x01) failed 00:18:20.620 get_feature(0x02) failed 00:18:20.620 get_feature(0x04) failed 00:18:20.620 ===================================================== 00:18:20.620 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:18:20.620 ===================================================== 00:18:20.620 Controller Capabilities/Features 00:18:20.620 ================================ 00:18:20.620 Vendor ID: 0000 00:18:20.620 Subsystem Vendor ID: 0000 00:18:20.620 Serial Number: 8f9adeb1a48090b679c7 00:18:20.620 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:18:20.620 Firmware Version: 6.8.9-20 00:18:20.620 Recommended Arb Burst: 6 00:18:20.620 IEEE OUI Identifier: 00 00 00 00:18:20.620 Multi-path I/O 00:18:20.620 May have multiple subsystem ports: Yes 00:18:20.620 May have multiple controllers: Yes 00:18:20.620 Associated with SR-IOV VF: No 00:18:20.620 Max Data Transfer Size: Unlimited 00:18:20.620 Max Number of Namespaces: 1024 00:18:20.620 Max Number of I/O Queues: 128 00:18:20.620 NVMe Specification Version (VS): 1.3 00:18:20.620 NVMe Specification Version (Identify): 1.3 00:18:20.620 Maximum Queue Entries: 1024 00:18:20.620 Contiguous Queues Required: No 00:18:20.620 Arbitration Mechanisms Supported 00:18:20.620 Weighted Round Robin: Not Supported 00:18:20.620 Vendor Specific: Not Supported 00:18:20.620 Reset Timeout: 7500 ms 00:18:20.620 Doorbell Stride: 4 bytes 00:18:20.620 NVM Subsystem Reset: Not Supported 00:18:20.620 Command Sets Supported 00:18:20.620 NVM Command Set: Supported 00:18:20.620 Boot Partition: Not Supported 00:18:20.620 Memory 
Page Size Minimum: 4096 bytes 00:18:20.620 Memory Page Size Maximum: 4096 bytes 00:18:20.620 Persistent Memory Region: Not Supported 00:18:20.620 Optional Asynchronous Events Supported 00:18:20.620 Namespace Attribute Notices: Supported 00:18:20.620 Firmware Activation Notices: Not Supported 00:18:20.620 ANA Change Notices: Supported 00:18:20.620 PLE Aggregate Log Change Notices: Not Supported 00:18:20.620 LBA Status Info Alert Notices: Not Supported 00:18:20.620 EGE Aggregate Log Change Notices: Not Supported 00:18:20.620 Normal NVM Subsystem Shutdown event: Not Supported 00:18:20.620 Zone Descriptor Change Notices: Not Supported 00:18:20.620 Discovery Log Change Notices: Not Supported 00:18:20.620 Controller Attributes 00:18:20.620 128-bit Host Identifier: Supported 00:18:20.620 Non-Operational Permissive Mode: Not Supported 00:18:20.620 NVM Sets: Not Supported 00:18:20.620 Read Recovery Levels: Not Supported 00:18:20.620 Endurance Groups: Not Supported 00:18:20.620 Predictable Latency Mode: Not Supported 00:18:20.620 Traffic Based Keep ALive: Supported 00:18:20.620 Namespace Granularity: Not Supported 00:18:20.620 SQ Associations: Not Supported 00:18:20.620 UUID List: Not Supported 00:18:20.620 Multi-Domain Subsystem: Not Supported 00:18:20.620 Fixed Capacity Management: Not Supported 00:18:20.620 Variable Capacity Management: Not Supported 00:18:20.620 Delete Endurance Group: Not Supported 00:18:20.620 Delete NVM Set: Not Supported 00:18:20.620 Extended LBA Formats Supported: Not Supported 00:18:20.620 Flexible Data Placement Supported: Not Supported 00:18:20.620 00:18:20.620 Controller Memory Buffer Support 00:18:20.620 ================================ 00:18:20.620 Supported: No 00:18:20.620 00:18:20.620 Persistent Memory Region Support 00:18:20.620 ================================ 00:18:20.620 Supported: No 00:18:20.620 00:18:20.620 Admin Command Set Attributes 00:18:20.620 ============================ 00:18:20.620 Security Send/Receive: Not Supported 00:18:20.620 Format NVM: Not Supported 00:18:20.620 Firmware Activate/Download: Not Supported 00:18:20.620 Namespace Management: Not Supported 00:18:20.621 Device Self-Test: Not Supported 00:18:20.621 Directives: Not Supported 00:18:20.621 NVMe-MI: Not Supported 00:18:20.621 Virtualization Management: Not Supported 00:18:20.621 Doorbell Buffer Config: Not Supported 00:18:20.621 Get LBA Status Capability: Not Supported 00:18:20.621 Command & Feature Lockdown Capability: Not Supported 00:18:20.621 Abort Command Limit: 4 00:18:20.621 Async Event Request Limit: 4 00:18:20.621 Number of Firmware Slots: N/A 00:18:20.621 Firmware Slot 1 Read-Only: N/A 00:18:20.621 Firmware Activation Without Reset: N/A 00:18:20.621 Multiple Update Detection Support: N/A 00:18:20.621 Firmware Update Granularity: No Information Provided 00:18:20.621 Per-Namespace SMART Log: Yes 00:18:20.621 Asymmetric Namespace Access Log Page: Supported 00:18:20.621 ANA Transition Time : 10 sec 00:18:20.621 00:18:20.621 Asymmetric Namespace Access Capabilities 00:18:20.621 ANA Optimized State : Supported 00:18:20.621 ANA Non-Optimized State : Supported 00:18:20.621 ANA Inaccessible State : Supported 00:18:20.621 ANA Persistent Loss State : Supported 00:18:20.621 ANA Change State : Supported 00:18:20.621 ANAGRPID is not changed : No 00:18:20.621 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:18:20.621 00:18:20.621 ANA Group Identifier Maximum : 128 00:18:20.621 Number of ANA Group Identifiers : 128 00:18:20.621 Max Number of Allowed Namespaces : 1024 00:18:20.621 Subsystem 
NQN: nqn.2016-06.io.spdk:testnqn 00:18:20.621 Command Effects Log Page: Supported 00:18:20.621 Get Log Page Extended Data: Supported 00:18:20.621 Telemetry Log Pages: Not Supported 00:18:20.621 Persistent Event Log Pages: Not Supported 00:18:20.621 Supported Log Pages Log Page: May Support 00:18:20.621 Commands Supported & Effects Log Page: Not Supported 00:18:20.621 Feature Identifiers & Effects Log Page:May Support 00:18:20.621 NVMe-MI Commands & Effects Log Page: May Support 00:18:20.621 Data Area 4 for Telemetry Log: Not Supported 00:18:20.621 Error Log Page Entries Supported: 128 00:18:20.621 Keep Alive: Supported 00:18:20.621 Keep Alive Granularity: 1000 ms 00:18:20.621 00:18:20.621 NVM Command Set Attributes 00:18:20.621 ========================== 00:18:20.621 Submission Queue Entry Size 00:18:20.621 Max: 64 00:18:20.621 Min: 64 00:18:20.621 Completion Queue Entry Size 00:18:20.621 Max: 16 00:18:20.621 Min: 16 00:18:20.621 Number of Namespaces: 1024 00:18:20.621 Compare Command: Not Supported 00:18:20.621 Write Uncorrectable Command: Not Supported 00:18:20.621 Dataset Management Command: Supported 00:18:20.621 Write Zeroes Command: Supported 00:18:20.621 Set Features Save Field: Not Supported 00:18:20.621 Reservations: Not Supported 00:18:20.621 Timestamp: Not Supported 00:18:20.621 Copy: Not Supported 00:18:20.621 Volatile Write Cache: Present 00:18:20.621 Atomic Write Unit (Normal): 1 00:18:20.621 Atomic Write Unit (PFail): 1 00:18:20.621 Atomic Compare & Write Unit: 1 00:18:20.621 Fused Compare & Write: Not Supported 00:18:20.621 Scatter-Gather List 00:18:20.621 SGL Command Set: Supported 00:18:20.621 SGL Keyed: Not Supported 00:18:20.621 SGL Bit Bucket Descriptor: Not Supported 00:18:20.621 SGL Metadata Pointer: Not Supported 00:18:20.621 Oversized SGL: Not Supported 00:18:20.621 SGL Metadata Address: Not Supported 00:18:20.621 SGL Offset: Supported 00:18:20.621 Transport SGL Data Block: Not Supported 00:18:20.621 Replay Protected Memory Block: Not Supported 00:18:20.621 00:18:20.621 Firmware Slot Information 00:18:20.621 ========================= 00:18:20.621 Active slot: 0 00:18:20.621 00:18:20.621 Asymmetric Namespace Access 00:18:20.621 =========================== 00:18:20.621 Change Count : 0 00:18:20.621 Number of ANA Group Descriptors : 1 00:18:20.621 ANA Group Descriptor : 0 00:18:20.621 ANA Group ID : 1 00:18:20.621 Number of NSID Values : 1 00:18:20.621 Change Count : 0 00:18:20.621 ANA State : 1 00:18:20.621 Namespace Identifier : 1 00:18:20.621 00:18:20.621 Commands Supported and Effects 00:18:20.621 ============================== 00:18:20.621 Admin Commands 00:18:20.621 -------------- 00:18:20.621 Get Log Page (02h): Supported 00:18:20.621 Identify (06h): Supported 00:18:20.621 Abort (08h): Supported 00:18:20.621 Set Features (09h): Supported 00:18:20.621 Get Features (0Ah): Supported 00:18:20.621 Asynchronous Event Request (0Ch): Supported 00:18:20.621 Keep Alive (18h): Supported 00:18:20.621 I/O Commands 00:18:20.621 ------------ 00:18:20.621 Flush (00h): Supported 00:18:20.621 Write (01h): Supported LBA-Change 00:18:20.621 Read (02h): Supported 00:18:20.621 Write Zeroes (08h): Supported LBA-Change 00:18:20.621 Dataset Management (09h): Supported 00:18:20.621 00:18:20.621 Error Log 00:18:20.621 ========= 00:18:20.621 Entry: 0 00:18:20.621 Error Count: 0x3 00:18:20.621 Submission Queue Id: 0x0 00:18:20.621 Command Id: 0x5 00:18:20.621 Phase Bit: 0 00:18:20.621 Status Code: 0x2 00:18:20.621 Status Code Type: 0x0 00:18:20.621 Do Not Retry: 1 00:18:20.621 Error 
Location: 0x28 00:18:20.621 LBA: 0x0 00:18:20.621 Namespace: 0x0 00:18:20.621 Vendor Log Page: 0x0 00:18:20.621 ----------- 00:18:20.621 Entry: 1 00:18:20.621 Error Count: 0x2 00:18:20.621 Submission Queue Id: 0x0 00:18:20.621 Command Id: 0x5 00:18:20.621 Phase Bit: 0 00:18:20.621 Status Code: 0x2 00:18:20.621 Status Code Type: 0x0 00:18:20.621 Do Not Retry: 1 00:18:20.621 Error Location: 0x28 00:18:20.621 LBA: 0x0 00:18:20.621 Namespace: 0x0 00:18:20.621 Vendor Log Page: 0x0 00:18:20.621 ----------- 00:18:20.621 Entry: 2 00:18:20.621 Error Count: 0x1 00:18:20.621 Submission Queue Id: 0x0 00:18:20.621 Command Id: 0x4 00:18:20.621 Phase Bit: 0 00:18:20.621 Status Code: 0x2 00:18:20.621 Status Code Type: 0x0 00:18:20.621 Do Not Retry: 1 00:18:20.621 Error Location: 0x28 00:18:20.621 LBA: 0x0 00:18:20.621 Namespace: 0x0 00:18:20.621 Vendor Log Page: 0x0 00:18:20.621 00:18:20.621 Number of Queues 00:18:20.621 ================ 00:18:20.621 Number of I/O Submission Queues: 128 00:18:20.621 Number of I/O Completion Queues: 128 00:18:20.621 00:18:20.621 ZNS Specific Controller Data 00:18:20.621 ============================ 00:18:20.621 Zone Append Size Limit: 0 00:18:20.621 00:18:20.621 00:18:20.621 Active Namespaces 00:18:20.621 ================= 00:18:20.621 get_feature(0x05) failed 00:18:20.621 Namespace ID:1 00:18:20.621 Command Set Identifier: NVM (00h) 00:18:20.621 Deallocate: Supported 00:18:20.621 Deallocated/Unwritten Error: Not Supported 00:18:20.621 Deallocated Read Value: Unknown 00:18:20.621 Deallocate in Write Zeroes: Not Supported 00:18:20.621 Deallocated Guard Field: 0xFFFF 00:18:20.621 Flush: Supported 00:18:20.621 Reservation: Not Supported 00:18:20.621 Namespace Sharing Capabilities: Multiple Controllers 00:18:20.621 Size (in LBAs): 1310720 (5GiB) 00:18:20.621 Capacity (in LBAs): 1310720 (5GiB) 00:18:20.621 Utilization (in LBAs): 1310720 (5GiB) 00:18:20.621 UUID: 5d8258b2-0fa7-47c3-bda3-9ab168debd58 00:18:20.621 Thin Provisioning: Not Supported 00:18:20.621 Per-NS Atomic Units: Yes 00:18:20.621 Atomic Boundary Size (Normal): 0 00:18:20.621 Atomic Boundary Size (PFail): 0 00:18:20.621 Atomic Boundary Offset: 0 00:18:20.621 NGUID/EUI64 Never Reused: No 00:18:20.621 ANA group ID: 1 00:18:20.621 Namespace Write Protected: No 00:18:20.621 Number of LBA Formats: 1 00:18:20.621 Current LBA Format: LBA Format #00 00:18:20.621 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:18:20.621 00:18:20.621 09:55:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:18:20.621 09:55:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:20.621 09:55:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:18:20.621 09:55:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:20.621 09:55:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:18:20.621 09:55:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:20.621 09:55:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:20.621 rmmod nvme_tcp 00:18:20.621 rmmod nvme_fabrics 00:18:20.880 09:55:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:20.880 09:55:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:18:20.880 09:55:45 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:18:20.880 09:55:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:18:20.880 09:55:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:20.880 09:55:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:20.880 09:55:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:20.880 09:55:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:18:20.880 09:55:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:18:20.880 09:55:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:20.880 09:55:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:18:20.880 09:55:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:20.880 09:55:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:20.880 09:55:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:20.880 09:55:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:20.880 09:55:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:20.880 09:55:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:20.880 09:55:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:20.880 09:55:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:20.880 09:55:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:20.880 09:55:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:20.880 09:55:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:20.880 09:55:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:20.880 09:55:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:20.880 09:55:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:20.880 09:55:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:20.880 09:55:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:20.880 09:55:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:20.880 09:55:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:20.880 09:55:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:21.138 09:55:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@300 -- 
# return 0 00:18:21.138 09:55:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:18:21.138 09:55:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:18:21.138 09:55:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:18:21.138 09:55:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:18:21.138 09:55:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:18:21.138 09:55:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:18:21.138 09:55:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:18:21.138 09:55:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:18:21.138 09:55:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:18:21.138 09:55:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:21.706 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:21.965 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:18:21.965 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:18:21.965 ************************************ 00:18:21.965 END TEST nvmf_identify_kernel_target 00:18:21.965 ************************************ 00:18:21.965 00:18:21.965 real 0m3.364s 00:18:21.965 user 0m1.188s 00:18:21.965 sys 0m1.529s 00:18:21.965 09:55:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:21.965 09:55:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.965 09:55:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:18:21.965 09:55:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:21.965 09:55:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:21.965 09:55:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:21.965 ************************************ 00:18:21.965 START TEST nvmf_auth_host 00:18:21.965 ************************************ 00:18:21.965 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:18:22.225 * Looking for test storage... 
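The identify_kernel_target test that just finished drives the Linux kernel nvmet target entirely through configfs: it creates a subsystem and a namespace backed by one of the NVMe block devices released by setup.sh reset, exposes them on a TCP port at 10.0.0.1:4420, runs discovery and identify against them, and finally removes everything with rmdir and a modprobe -r. A condensed sketch of that configfs sequence, reusing the NQN, address and /dev/nvme1n1 backing device seen in the trace (any idle block device would do):

nvmet=/sys/kernel/config/nvmet
nqn=nqn.2016-06.io.spdk:testnqn
modprobe nvmet nvmet-tcp

# subsystem with a single namespace backed by a block device
mkdir $nvmet/subsystems/$nqn
echo 1 > $nvmet/subsystems/$nqn/attr_allow_any_host
mkdir $nvmet/subsystems/$nqn/namespaces/1
echo /dev/nvme1n1 > $nvmet/subsystems/$nqn/namespaces/1/device_path
echo 1 > $nvmet/subsystems/$nqn/namespaces/1/enable

# TCP listener on 10.0.0.1:4420, then expose the subsystem on it
mkdir $nvmet/ports/1
echo tcp > $nvmet/ports/1/addr_trtype
echo ipv4 > $nvmet/ports/1/addr_adrfam
echo 10.0.0.1 > $nvmet/ports/1/addr_traddr
echo 4420 > $nvmet/ports/1/addr_trsvcid
ln -s $nvmet/subsystems/$nqn $nvmet/ports/1/subsystems/$nqn

# discovery from the initiator side
nvme discover -t tcp -a 10.0.0.1 -s 4420

# teardown: disable and unlink first, then rmdir in reverse order
echo 0 > $nvmet/subsystems/$nqn/namespaces/1/enable
rm $nvmet/ports/1/subsystems/$nqn
rmdir $nvmet/subsystems/$nqn/namespaces/1
rmdir $nvmet/ports/1
rmdir $nvmet/subsystems/$nqn
modprobe -r nvmet_tcp nvmet

The test itself exercises the target with spdk_nvme_identify against both the discovery NQN and nqn.2016-06.io.spdk:testnqn; with the stock Linux initiator, an equivalent check would be nvme connect -t tcp -a 10.0.0.1 -s 4420 -n nqn.2016-06.io.spdk:testnqn followed by nvme id-ctrl on the controller device that appears.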
00:18:22.225 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:22.225 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:22.225 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:22.225 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lcov --version 00:18:22.225 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:22.225 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:22.225 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:22.225 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:22.225 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:18:22.225 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:18:22.225 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:18:22.225 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:18:22.225 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:18:22.225 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:18:22.225 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:18:22.225 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:22.225 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:18:22.225 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:18:22.225 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:22.225 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:22.225 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:18:22.225 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:18:22.225 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:22.225 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:18:22.225 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:18:22.225 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:18:22.225 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:18:22.225 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:22.225 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:18:22.225 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:18:22.225 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:22.225 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:22.225 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:18:22.225 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:22.225 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:22.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:22.225 --rc genhtml_branch_coverage=1 00:18:22.225 --rc genhtml_function_coverage=1 00:18:22.225 --rc genhtml_legend=1 00:18:22.225 --rc geninfo_all_blocks=1 00:18:22.225 --rc geninfo_unexecuted_blocks=1 00:18:22.225 00:18:22.225 ' 00:18:22.225 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:22.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:22.225 --rc genhtml_branch_coverage=1 00:18:22.225 --rc genhtml_function_coverage=1 00:18:22.225 --rc genhtml_legend=1 00:18:22.225 --rc geninfo_all_blocks=1 00:18:22.226 --rc geninfo_unexecuted_blocks=1 00:18:22.226 00:18:22.226 ' 00:18:22.226 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:22.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:22.226 --rc genhtml_branch_coverage=1 00:18:22.226 --rc genhtml_function_coverage=1 00:18:22.226 --rc genhtml_legend=1 00:18:22.226 --rc geninfo_all_blocks=1 00:18:22.226 --rc geninfo_unexecuted_blocks=1 00:18:22.226 00:18:22.226 ' 00:18:22.226 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:22.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:22.226 --rc genhtml_branch_coverage=1 00:18:22.226 --rc genhtml_function_coverage=1 00:18:22.226 --rc genhtml_legend=1 00:18:22.226 --rc geninfo_all_blocks=1 00:18:22.226 --rc geninfo_unexecuted_blocks=1 00:18:22.226 00:18:22.226 ' 00:18:22.226 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:22.226 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:18:22.226 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:22.226 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:22.226 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:22.226 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:22.226 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:22.226 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:22.226 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:22.226 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:22.226 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:22.226 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:22.226 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:18:22.226 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:18:22.226 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:22.226 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:22.226 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:22.226 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:22.226 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:22.226 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:18:22.226 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:22.226 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:22.226 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:22.226 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.226 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.226 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.226 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:18:22.226 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.226 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:18:22.226 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:22.226 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:22.226 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:22.226 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:22.226 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:22.226 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:22.226 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:22.226 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:22.226 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:22.226 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:22.226 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:18:22.226 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:18:22.226 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:18:22.226 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:18:22.226 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:18:22.226 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:18:22.226 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:18:22.226 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:18:22.226 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:18:22.226 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:22.226 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:22.226 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:22.226 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:22.226 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:22.226 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:22.226 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:22.226 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:22.226 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:22.226 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:22.226 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:22.226 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:22.226 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:22.226 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:22.226 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:22.226 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:22.226 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:22.226 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:22.226 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:22.226 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:22.226 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:22.226 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:22.226 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:22.226 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:22.226 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:22.226 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:22.226 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:22.226 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:22.226 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:22.226 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:22.227 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:22.227 Cannot find device "nvmf_init_br" 00:18:22.227 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:18:22.227 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:22.227 Cannot find device "nvmf_init_br2" 00:18:22.227 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:18:22.227 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:22.227 Cannot find device "nvmf_tgt_br" 00:18:22.227 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # true 00:18:22.227 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:22.227 Cannot find device "nvmf_tgt_br2" 00:18:22.227 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # true 00:18:22.227 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:22.227 Cannot find device "nvmf_init_br" 00:18:22.227 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # true 00:18:22.227 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:22.227 Cannot find device "nvmf_init_br2" 00:18:22.227 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # true 00:18:22.227 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:22.227 Cannot find device "nvmf_tgt_br" 00:18:22.227 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # true 00:18:22.227 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:22.486 Cannot find device "nvmf_tgt_br2" 00:18:22.486 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # true 00:18:22.486 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:22.486 Cannot find device "nvmf_br" 00:18:22.486 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # true 00:18:22.486 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:22.486 Cannot find device "nvmf_init_if" 00:18:22.486 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # true 00:18:22.486 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:22.486 Cannot find device "nvmf_init_if2" 00:18:22.486 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # true 00:18:22.486 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:22.486 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:22.486 09:55:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # true 00:18:22.486 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:22.486 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:22.486 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # true 00:18:22.486 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:22.486 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:22.486 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:22.486 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:22.486 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:22.486 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:22.486 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:22.486 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:22.487 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:22.487 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:22.487 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:22.487 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:22.487 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:22.487 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:22.487 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:22.487 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:22.487 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:22.487 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:22.487 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:22.487 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:22.487 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:22.487 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:22.487 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:22.487 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:22.487 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 
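Condensed, the nvmf_veth_init sequence being traced here builds one bridge joining two veth pairs, with the target ends of the pairs moved into the nvmf_tgt_ns_spdk namespace. A minimal sketch using only the first pair (nvmf_init_if2/nvmf_tgt_if2 with 10.0.0.2 and 10.0.0.4 follow the identical pattern):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side pair, both ends stay in the root netns
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target-side pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # target end lives in the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if                     # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target address
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br                      # bridge joins the peers left in the root netns
  ip link set nvmf_tgt_br master nvmf_br

The pings that follow in the trace confirm reachability in both directions (10.0.0.3 and 10.0.0.4 from the root namespace, 10.0.0.1 and 10.0.0.2 from inside nvmf_tgt_ns_spdk) before the target is started.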
00:18:22.487 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:22.746 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:22.746 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:22.746 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:22.746 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:22.746 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:22.746 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:22.746 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:22.746 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:22.746 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.098 ms 00:18:22.746 00:18:22.746 --- 10.0.0.3 ping statistics --- 00:18:22.746 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:22.746 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:18:22.746 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:22.746 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:22.746 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.068 ms 00:18:22.746 00:18:22.746 --- 10.0.0.4 ping statistics --- 00:18:22.746 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:22.746 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:18:22.746 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:22.746 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:22.746 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:18:22.746 00:18:22.746 --- 10.0.0.1 ping statistics --- 00:18:22.746 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:22.746 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:18:22.746 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:22.746 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:22.746 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:18:22.746 00:18:22.746 --- 10.0.0.2 ping statistics --- 00:18:22.746 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:22.746 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:18:22.746 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:22.747 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@461 -- # return 0 00:18:22.747 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:22.747 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:22.747 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:22.747 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:22.747 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:22.747 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:22.747 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:22.747 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:18:22.747 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:22.747 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:22.747 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:22.747 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=78220 00:18:22.747 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 78220 00:18:22.747 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:18:22.747 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 78220 ']' 00:18:22.747 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:22.747 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:22.747 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
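Taken together, the nvmfappstart step above reduces to: open TCP port 4420 on the initiator interfaces, load nvme-tcp, launch the SPDK target inside the namespace, and wait for its RPC socket. A rough equivalent, with the polling loop as an illustration rather than the actual waitforlisten implementation (the trace's ipts wrapper additionally tags each iptables rule with an SPDK_NVMF comment):

  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  modprobe nvme-tcp
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &
  nvmfpid=$!                                   # 78220 in this run
  # illustrative wait; the real waitforlisten retries against /var/tmp/spdk.sock
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done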
00:18:22.747 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:22.747 09:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:23.006 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:23.006 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:18:23.006 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:23.006 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:23.006 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:23.266 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:23.266 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:18:23.266 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:18:23.266 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:18:23.266 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:23.266 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:18:23.266 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:18:23.266 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:18:23.266 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:23.266 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=8b3548b14631c3b8d7fee6ded60153f3 00:18:23.266 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:18:23.266 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.7Nn 00:18:23.266 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 8b3548b14631c3b8d7fee6ded60153f3 0 00:18:23.266 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 8b3548b14631c3b8d7fee6ded60153f3 0 00:18:23.266 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:18:23.266 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:23.266 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=8b3548b14631c3b8d7fee6ded60153f3 00:18:23.266 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:18:23.266 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:18:23.266 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.7Nn 00:18:23.266 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.7Nn 00:18:23.266 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.7Nn 00:18:23.266 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:18:23.266 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:18:23.266 09:55:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:23.266 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:18:23.266 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:18:23.266 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:18:23.266 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:23.266 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=730f8b0de9cf5117ed2d1c8907269e6dad82075b6992d2099fd4b2844183d126 00:18:23.266 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:18:23.266 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.D6L 00:18:23.266 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 730f8b0de9cf5117ed2d1c8907269e6dad82075b6992d2099fd4b2844183d126 3 00:18:23.266 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 730f8b0de9cf5117ed2d1c8907269e6dad82075b6992d2099fd4b2844183d126 3 00:18:23.266 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:18:23.266 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:23.266 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=730f8b0de9cf5117ed2d1c8907269e6dad82075b6992d2099fd4b2844183d126 00:18:23.266 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:18:23.266 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:18:23.267 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.D6L 00:18:23.267 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.D6L 00:18:23.267 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.D6L 00:18:23.267 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:18:23.267 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:18:23.267 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:23.267 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:18:23.267 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:18:23.267 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:18:23.267 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:23.267 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=cdc5c2623c03e2eaca8e475b14b25914a2ac22991bfa49e7 00:18:23.267 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:18:23.267 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.ZXZ 00:18:23.267 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key cdc5c2623c03e2eaca8e475b14b25914a2ac22991bfa49e7 0 00:18:23.267 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 cdc5c2623c03e2eaca8e475b14b25914a2ac22991bfa49e7 0 
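Every gen_dhchap_key call in this stretch follows the same recipe: read len/2 random bytes as a hex string with xxd, wrap that string in the DHHC-1 secret representation, and store it in a 0600 temp file whose path becomes keys[i] or ckeys[i]. A self-contained sketch (make_dhchap_key is a made-up helper name; the payload layout, the ASCII key with its CRC-32 appended before base64 encoding, is inferred from the DHHC-1:00:... strings that appear later in this log):

  make_dhchap_key() {    # usage: make_dhchap_key <digest id 0-3> <key length>
      local digest=$1 len=$2 key file
      key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)    # hex string of the requested length
      file=$(mktemp -t spdk.key-XXX)
      # DHHC-1:<digest>:<base64(key bytes + CRC-32 of key)>: as seen in this log
      python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(k + zlib.crc32(k).to_bytes(4, "little")).decode()))' "$key" "$digest" > "$file"
      chmod 0600 "$file"
      echo "$file"
  }

  make_dhchap_key 0 32    # e.g. a 32-character null-digest key, comparable to keys[0] above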
00:18:23.267 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:18:23.267 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:23.267 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=cdc5c2623c03e2eaca8e475b14b25914a2ac22991bfa49e7 00:18:23.267 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:18:23.267 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:18:23.267 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.ZXZ 00:18:23.267 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.ZXZ 00:18:23.267 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.ZXZ 00:18:23.267 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:18:23.267 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:18:23.267 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:23.267 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:18:23.267 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:18:23.267 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:18:23.267 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:23.267 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=5acc7904efa2857f931db6164a5fba3d08644285dd66d755 00:18:23.267 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:18:23.267 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.WLe 00:18:23.267 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 5acc7904efa2857f931db6164a5fba3d08644285dd66d755 2 00:18:23.267 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 5acc7904efa2857f931db6164a5fba3d08644285dd66d755 2 00:18:23.267 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:18:23.267 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:23.267 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=5acc7904efa2857f931db6164a5fba3d08644285dd66d755 00:18:23.267 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:18:23.267 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:18:23.526 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.WLe 00:18:23.526 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.WLe 00:18:23.526 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.WLe 00:18:23.527 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:18:23.527 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:18:23.527 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:23.527 09:55:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:18:23.527 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:18:23.527 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:18:23.527 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:23.527 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=9a9ed969c2a50f7924959c4c008d68fc 00:18:23.527 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:18:23.527 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.x74 00:18:23.527 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 9a9ed969c2a50f7924959c4c008d68fc 1 00:18:23.527 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 9a9ed969c2a50f7924959c4c008d68fc 1 00:18:23.527 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:18:23.527 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:23.527 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=9a9ed969c2a50f7924959c4c008d68fc 00:18:23.527 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:18:23.527 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:18:23.527 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.x74 00:18:23.527 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.x74 00:18:23.527 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.x74 00:18:23.527 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:18:23.527 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:18:23.527 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:23.527 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:18:23.527 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:18:23.527 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:18:23.527 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:23.527 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=888e43e9e02ba9bdc3ba1d0e202e4c54 00:18:23.527 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:18:23.527 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.w2X 00:18:23.527 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 888e43e9e02ba9bdc3ba1d0e202e4c54 1 00:18:23.527 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 888e43e9e02ba9bdc3ba1d0e202e4c54 1 00:18:23.527 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:18:23.527 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:23.527 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=888e43e9e02ba9bdc3ba1d0e202e4c54 00:18:23.527 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:18:23.527 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:18:23.527 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.w2X 00:18:23.527 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.w2X 00:18:23.527 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.w2X 00:18:23.527 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:18:23.527 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:18:23.527 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:23.527 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:18:23.527 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:18:23.527 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:18:23.527 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:23.527 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=13d73a5dc1c27218ec0f53640ee5c59ecb605dcf6e1b3bb5 00:18:23.527 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:18:23.527 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.5Kl 00:18:23.527 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 13d73a5dc1c27218ec0f53640ee5c59ecb605dcf6e1b3bb5 2 00:18:23.527 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 13d73a5dc1c27218ec0f53640ee5c59ecb605dcf6e1b3bb5 2 00:18:23.527 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:18:23.527 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:23.527 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=13d73a5dc1c27218ec0f53640ee5c59ecb605dcf6e1b3bb5 00:18:23.527 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:18:23.527 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:18:23.527 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.5Kl 00:18:23.527 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.5Kl 00:18:23.527 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.5Kl 00:18:23.527 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:18:23.527 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:18:23.527 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:23.527 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:18:23.527 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:18:23.527 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:18:23.527 09:55:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:23.527 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=2ab3ecda5a729fd64849b99e83c242dd 00:18:23.527 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:18:23.527 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.6kZ 00:18:23.527 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 2ab3ecda5a729fd64849b99e83c242dd 0 00:18:23.527 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 2ab3ecda5a729fd64849b99e83c242dd 0 00:18:23.527 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:18:23.527 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:23.527 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=2ab3ecda5a729fd64849b99e83c242dd 00:18:23.527 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:18:23.527 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:18:23.787 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.6kZ 00:18:23.787 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.6kZ 00:18:23.787 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.6kZ 00:18:23.787 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:18:23.787 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:18:23.787 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:23.787 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:18:23.787 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:18:23.787 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:18:23.787 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:23.787 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=cc8c4bdf1dd1bd40282f9a5c0b48b921e98ccdc26faa37b7172256bed5cffda3 00:18:23.787 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:18:23.787 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.PgB 00:18:23.787 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key cc8c4bdf1dd1bd40282f9a5c0b48b921e98ccdc26faa37b7172256bed5cffda3 3 00:18:23.787 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 cc8c4bdf1dd1bd40282f9a5c0b48b921e98ccdc26faa37b7172256bed5cffda3 3 00:18:23.787 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:18:23.787 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:23.787 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=cc8c4bdf1dd1bd40282f9a5c0b48b921e98ccdc26faa37b7172256bed5cffda3 00:18:23.787 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:18:23.787 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:18:23.787 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.PgB 00:18:23.787 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.PgB 00:18:23.787 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.PgB 00:18:23.787 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:18:23.787 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 78220 00:18:23.787 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 78220 ']' 00:18:23.787 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:23.787 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:23.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:23.787 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:23.787 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:23.787 09:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:24.048 09:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:24.048 09:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:18:24.048 09:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:18:24.048 09:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.7Nn 00:18:24.048 09:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.048 09:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:24.048 09:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.048 09:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.D6L ]] 00:18:24.048 09:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.D6L 00:18:24.048 09:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.048 09:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:24.048 09:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.048 09:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:18:24.048 09:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.ZXZ 00:18:24.048 09:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.048 09:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:24.048 09:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.048 09:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.WLe ]] 00:18:24.048 09:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.WLe 00:18:24.048 09:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.048 09:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:24.048 09:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.048 09:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:18:24.048 09:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.x74 00:18:24.048 09:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.048 09:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:24.048 09:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.048 09:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.w2X ]] 00:18:24.048 09:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.w2X 00:18:24.048 09:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.048 09:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:24.048 09:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.048 09:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:18:24.048 09:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.5Kl 00:18:24.048 09:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.048 09:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:24.048 09:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.048 09:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.6kZ ]] 00:18:24.048 09:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.6kZ 00:18:24.048 09:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.048 09:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:24.048 09:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.048 09:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:18:24.048 09:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.PgB 00:18:24.048 09:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.048 09:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:24.048 09:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.048 09:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:18:24.048 09:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:18:24.048 09:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:18:24.048 09:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:24.048 09:55:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:24.048 09:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:24.048 09:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:24.048 09:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:24.048 09:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:24.048 09:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:24.048 09:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:24.048 09:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:24.048 09:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:24.048 09:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:18:24.048 09:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:18:24.048 09:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:18:24.048 09:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:18:24.048 09:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:18:24.048 09:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:18:24.048 09:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:18:24.048 09:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:18:24.048 09:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:18:24.307 09:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:18:24.307 09:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:18:24.565 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:24.565 Waiting for block devices as requested 00:18:24.565 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:18:24.824 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:18:25.392 09:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:18:25.392 09:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:18:25.392 09:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:18:25.392 09:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:18:25.392 09:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:18:25.392 09:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:25.392 09:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:18:25.392 09:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:18:25.392 09:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:18:25.392 No valid GPT data, bailing 00:18:25.392 09:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:18:25.392 09:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:18:25.392 09:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:18:25.392 09:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:18:25.392 09:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:18:25.392 09:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:18:25.392 09:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:18:25.392 09:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:18:25.392 09:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:18:25.392 09:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:25.392 09:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:18:25.392 09:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:18:25.392 09:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:18:25.392 No valid GPT data, bailing 00:18:25.392 09:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:18:25.392 09:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:18:25.392 09:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@395 -- # return 1 00:18:25.392 09:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:18:25.392 09:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:18:25.392 09:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:18:25.392 09:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:18:25.392 09:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:18:25.392 09:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:18:25.392 09:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:25.392 09:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:18:25.392 09:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:18:25.392 09:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:18:25.392 No valid GPT data, bailing 00:18:25.392 09:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:18:25.392 09:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:18:25.392 09:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:18:25.392 09:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:18:25.392 09:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:18:25.392 09:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:18:25.392 09:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:18:25.392 09:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:18:25.392 09:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:18:25.392 09:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:25.392 09:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:18:25.392 09:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:18:25.392 09:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:18:25.652 No valid GPT data, bailing 00:18:25.652 09:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:18:25.652 09:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:18:25.652 09:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:18:25.652 09:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:18:25.652 09:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:18:25.652 09:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:18:25.652 09:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:18:25.652 09:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:18:25.652 09:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:18:25.652 09:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:18:25.652 09:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:18:25.652 09:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:18:25.652 09:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:18:25.652 09:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:18:25.652 09:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:18:25.652 09:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:18:25.652 09:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:18:25.652 09:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --hostid=8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -a 10.0.0.1 -t tcp -s 4420 00:18:25.652 00:18:25.652 Discovery Log Number of Records 2, Generation counter 2 00:18:25.652 =====Discovery Log Entry 0====== 00:18:25.652 trtype: tcp 00:18:25.652 adrfam: ipv4 00:18:25.652 subtype: current discovery subsystem 00:18:25.652 treq: not specified, sq flow control disable supported 00:18:25.652 portid: 1 00:18:25.652 trsvcid: 4420 00:18:25.652 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:18:25.652 traddr: 10.0.0.1 00:18:25.652 eflags: none 00:18:25.652 sectype: none 00:18:25.652 =====Discovery Log Entry 1====== 00:18:25.652 trtype: tcp 00:18:25.652 adrfam: ipv4 00:18:25.652 subtype: nvme subsystem 00:18:25.652 treq: not specified, sq flow control disable supported 00:18:25.652 portid: 1 00:18:25.652 trsvcid: 4420 00:18:25.652 subnqn: nqn.2024-02.io.spdk:cnode0 00:18:25.652 traddr: 10.0.0.1 00:18:25.652 eflags: none 00:18:25.652 sectype: none 00:18:25.652 09:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:18:25.652 09:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:18:25.652 09:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:18:25.652 09:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:18:25.652 09:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:25.652 09:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:25.652 09:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:25.652 09:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:25.652 09:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2RjNWMyNjIzYzAzZTJlYWNhOGU0NzViMTRiMjU5MTRhMmFjMjI5OTFiZmE0OWU3nN04uQ==: 00:18:25.652 09:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:NWFjYzc5MDRlZmEyODU3ZjkzMWRiNjE2NGE1ZmJhM2QwODY0NDI4NWRkNjZkNzU11BHGyg==: 00:18:25.652 09:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:25.652 09:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:25.652 09:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2RjNWMyNjIzYzAzZTJlYWNhOGU0NzViMTRiMjU5MTRhMmFjMjI5OTFiZmE0OWU3nN04uQ==: 00:18:25.652 09:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWFjYzc5MDRlZmEyODU3ZjkzMWRiNjE2NGE1ZmJhM2QwODY0NDI4NWRkNjZkNzU11BHGyg==: ]] 00:18:25.652 09:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWFjYzc5MDRlZmEyODU3ZjkzMWRiNjE2NGE1ZmJhM2QwODY0NDI4NWRkNjZkNzU11BHGyg==: 00:18:25.652 09:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:18:25.652 09:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:18:25.652 09:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:18:25.652 09:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:25.912 09:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:18:25.912 09:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:25.912 09:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:18:25.912 09:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:25.912 09:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:25.912 09:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:25.912 09:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:25.912 09:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.912 09:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:25.912 09:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.912 09:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:25.912 09:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:25.912 09:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:25.912 09:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:25.912 09:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:25.912 09:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:25.912 09:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:25.912 09:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:25.912 09:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:25.912 09:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 
10.0.0.1 ]] 00:18:25.912 09:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:25.912 09:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:25.912 09:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.912 09:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:25.912 nvme0n1 00:18:25.912 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.912 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:25.912 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:25.912 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.912 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:25.912 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.912 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:25.912 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:25.912 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.912 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:25.912 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.912 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:18:25.912 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:25.912 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:25.912 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:18:25.912 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:25.912 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:25.912 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:25.912 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:25.912 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGIzNTQ4YjE0NjMxYzNiOGQ3ZmVlNmRlZDYwMTUzZjNC3SAe: 00:18:25.912 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzMwZjhiMGRlOWNmNTExN2VkMmQxYzg5MDcyNjllNmRhZDgyMDc1YjY5OTJkMjA5OWZkNGIyODQ0MTgzZDEyNn5WUho=: 00:18:25.912 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:25.912 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:25.912 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGIzNTQ4YjE0NjMxYzNiOGQ3ZmVlNmRlZDYwMTUzZjNC3SAe: 00:18:25.912 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzMwZjhiMGRlOWNmNTExN2VkMmQxYzg5MDcyNjllNmRhZDgyMDc1YjY5OTJkMjA5OWZkNGIyODQ0MTgzZDEyNn5WUho=: ]] 00:18:25.912 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NzMwZjhiMGRlOWNmNTExN2VkMmQxYzg5MDcyNjllNmRhZDgyMDc1YjY5OTJkMjA5OWZkNGIyODQ0MTgzZDEyNn5WUho=: 00:18:25.912 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:18:25.912 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:25.912 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:25.912 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:25.912 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:25.912 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:25.912 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:25.912 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.912 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:25.912 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.912 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:25.912 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:25.912 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:25.912 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:25.912 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:25.912 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:25.912 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:25.912 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:25.912 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:25.912 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:25.912 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:25.912 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:25.912 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.912 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:26.173 nvme0n1 00:18:26.173 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.173 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:26.173 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:26.173 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.173 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:26.173 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.173 
09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:26.173 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:26.173 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.173 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:26.173 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.173 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:26.173 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:18:26.173 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:26.173 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:26.173 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:26.173 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:26.173 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2RjNWMyNjIzYzAzZTJlYWNhOGU0NzViMTRiMjU5MTRhMmFjMjI5OTFiZmE0OWU3nN04uQ==: 00:18:26.173 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWFjYzc5MDRlZmEyODU3ZjkzMWRiNjE2NGE1ZmJhM2QwODY0NDI4NWRkNjZkNzU11BHGyg==: 00:18:26.173 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:26.173 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:26.173 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2RjNWMyNjIzYzAzZTJlYWNhOGU0NzViMTRiMjU5MTRhMmFjMjI5OTFiZmE0OWU3nN04uQ==: 00:18:26.173 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWFjYzc5MDRlZmEyODU3ZjkzMWRiNjE2NGE1ZmJhM2QwODY0NDI4NWRkNjZkNzU11BHGyg==: ]] 00:18:26.173 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWFjYzc5MDRlZmEyODU3ZjkzMWRiNjE2NGE1ZmJhM2QwODY0NDI4NWRkNjZkNzU11BHGyg==: 00:18:26.173 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:18:26.173 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:26.173 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:26.173 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:26.173 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:26.173 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:26.173 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:26.173 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.173 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:26.173 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.173 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:26.173 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:26.173 09:55:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:26.173 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:26.173 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:26.173 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:26.173 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:26.173 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:26.173 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:26.173 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:26.173 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:26.173 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:26.173 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.173 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:26.173 nvme0n1 00:18:26.173 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.173 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:26.173 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:26.173 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.173 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:26.173 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.434 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:26.434 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:26.434 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.434 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:26.434 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.434 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:26.434 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:18:26.434 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:26.434 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:26.434 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:26.434 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:26.434 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWE5ZWQ5NjljMmE1MGY3OTI0OTU5YzRjMDA4ZDY4ZmM2EsA1: 00:18:26.434 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODg4ZTQzZTllMDJiYTliZGMzYmExZDBlMjAyZTRjNTTHOIm4: 00:18:26.434 09:55:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:26.434 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:26.434 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWE5ZWQ5NjljMmE1MGY3OTI0OTU5YzRjMDA4ZDY4ZmM2EsA1: 00:18:26.434 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODg4ZTQzZTllMDJiYTliZGMzYmExZDBlMjAyZTRjNTTHOIm4: ]] 00:18:26.435 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODg4ZTQzZTllMDJiYTliZGMzYmExZDBlMjAyZTRjNTTHOIm4: 00:18:26.435 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:18:26.435 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:26.435 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:26.435 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:26.435 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:26.435 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:26.435 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:26.435 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.435 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:26.435 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.435 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:26.435 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:26.435 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:26.435 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:26.435 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:26.435 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:26.435 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:26.435 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:26.435 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:26.435 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:26.435 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:26.435 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:26.435 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.435 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:26.435 nvme0n1 00:18:26.435 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.435 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:18:26.435 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:26.435 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.435 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:26.435 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.435 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:26.435 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:26.435 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.435 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:26.435 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.435 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:26.435 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:18:26.435 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:26.435 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:26.435 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:26.435 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:26.435 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTNkNzNhNWRjMWMyNzIxOGVjMGY1MzY0MGVlNWM1OWVjYjYwNWRjZjZlMWIzYmI1Km7pog==: 00:18:26.435 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmFiM2VjZGE1YTcyOWZkNjQ4NDliOTllODNjMjQyZGRK6y/N: 00:18:26.435 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:26.435 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:26.435 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTNkNzNhNWRjMWMyNzIxOGVjMGY1MzY0MGVlNWM1OWVjYjYwNWRjZjZlMWIzYmI1Km7pog==: 00:18:26.435 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmFiM2VjZGE1YTcyOWZkNjQ4NDliOTllODNjMjQyZGRK6y/N: ]] 00:18:26.435 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmFiM2VjZGE1YTcyOWZkNjQ4NDliOTllODNjMjQyZGRK6y/N: 00:18:26.435 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:18:26.435 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:26.435 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:26.435 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:26.435 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:26.435 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:26.435 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:26.435 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.435 09:55:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:26.435 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.435 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:26.435 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:26.435 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:26.435 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:26.435 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:26.435 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:26.435 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:26.435 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:26.435 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:26.435 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:26.435 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:26.435 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:26.435 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.435 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:26.695 nvme0n1 00:18:26.695 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.695 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:26.695 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.695 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:26.695 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:26.695 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.695 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:26.695 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:26.695 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.695 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:26.695 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.695 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:26.695 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:18:26.695 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:26.695 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:26.695 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:26.695 
09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:26.695 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2M4YzRiZGYxZGQxYmQ0MDI4MmY5YTVjMGI0OGI5MjFlOThjY2RjMjZmYWEzN2I3MTcyMjU2YmVkNWNmZmRhM5n1X5Y=: 00:18:26.695 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:26.695 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:26.695 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:26.695 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2M4YzRiZGYxZGQxYmQ0MDI4MmY5YTVjMGI0OGI5MjFlOThjY2RjMjZmYWEzN2I3MTcyMjU2YmVkNWNmZmRhM5n1X5Y=: 00:18:26.695 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:26.695 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:18:26.695 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:26.695 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:26.695 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:26.695 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:26.695 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:26.695 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:26.695 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.695 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:26.695 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.695 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:26.695 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:26.695 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:26.695 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:26.695 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:26.695 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:26.695 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:26.695 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:26.695 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:26.695 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:26.695 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:26.695 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:26.695 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.695 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
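For readers following the trace: the nvmf/common.sh and host/auth.sh helpers above reduce to roughly the plain-shell sequence sketched below. This is a minimal approximation, not the scripts themselves. The NQNs, the 10.0.0.1:4420 TCP listener, /dev/nvme1n1, the DHHC-1 secrets and the rpc flags are copied from the trace; the configfs attribute names (attr_model, attr_allow_any_host, device_path, addr_*, dhchap_*) and the scripts/rpc.py entry point are assumptions about the kernel nvmet layout and the SPDK RPC client, since the trace only records the bare mkdir/echo/rpc_cmd calls without their redirection targets.

  # --- kernel NVMe-oF soft target via configfs, as set up by nvmf/common.sh ---
  modprobe nvmet                      # from the trace; nvmet-tcp may also be needed for a tcp port (assumption)
  cfs=/sys/kernel/config/nvmet
  subnqn=nqn.2024-02.io.spdk:cnode0
  hostnqn=nqn.2024-02.io.spdk:host0

  mkdir "$cfs/subsystems/$subnqn"
  mkdir "$cfs/subsystems/$subnqn/namespaces/1"
  mkdir "$cfs/ports/1"
  echo "SPDK-$subnqn" > "$cfs/subsystems/$subnqn/attr_model"                  # assumed target of 'echo SPDK-...'
  echo 1              > "$cfs/subsystems/$subnqn/attr_allow_any_host"
  echo /dev/nvme1n1   > "$cfs/subsystems/$subnqn/namespaces/1/device_path"    # block device picked by the loop above
  echo 1              > "$cfs/subsystems/$subnqn/namespaces/1/enable"
  echo 10.0.0.1 > "$cfs/ports/1/addr_traddr"
  echo tcp      > "$cfs/ports/1/addr_trtype"
  echo 4420     > "$cfs/ports/1/addr_trsvcid"
  echo ipv4     > "$cfs/ports/1/addr_adrfam"
  ln -s "$cfs/subsystems/$subnqn" "$cfs/ports/1/subsystems/"

  # --- per-host DHCHAP provisioning, as done by host/auth.sh (nvmet_auth_set_key) ---
  mkdir "$cfs/hosts/$hostnqn"
  echo 0 > "$cfs/subsystems/$subnqn/attr_allow_any_host"          # switch to an explicit allow-list
  ln -s "$cfs/hosts/$hostnqn" "$cfs/subsystems/$subnqn/allowed_hosts/"
  echo 'hmac(sha256)' > "$cfs/hosts/$hostnqn/dhchap_hash"         # assumed attribute names
  echo ffdhe2048      > "$cfs/hosts/$hostnqn/dhchap_dhgroup"
  echo 'DHHC-1:00:Y2RjNWMyNjIzYzAzZTJlYWNhOGU0NzViMTRiMjU5MTRhMmFjMjI5OTFiZmE0OWU3nN04uQ==:' \
      > "$cfs/hosts/$hostnqn/dhchap_key"                          # keyid 1 secret from the trace
  echo 'DHHC-1:02:NWFjYzc5MDRlZmEyODU3ZjkzMWRiNjE2NGE1ZmJhM2QwODY0NDI4NWRkNjZkNzU11BHGyg==:' \
      > "$cfs/hosts/$hostnqn/dhchap_ctrl_key"                     # controller key for bidirectional auth

  # --- SPDK initiator side, one iteration of the digest/dhgroup/keyid loop ---
  ./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q "$hostnqn" -n "$subnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1
  ./scripts/rpc.py bdev_nvme_get_controllers                      # the test expects one controller named nvme0
  ./scripts/rpc.py bdev_nvme_detach_controller nvme0

In the log, key1 and ckey1 appear to name SPDK keyring entries registered earlier in the run (not shown in this excerpt), and the test then repeats the set_options/attach/verify/detach cycle for every digest, FFDHE group and key index, which is why the same pattern recurs throughout the trace that follows.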
00:18:26.955 nvme0n1 00:18:26.955 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.955 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:26.955 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:26.955 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.955 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:26.955 09:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.955 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:26.955 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:26.955 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.955 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:26.955 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.955 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:26.955 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:26.955 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:18:26.955 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:26.955 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:26.955 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:26.955 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:26.955 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGIzNTQ4YjE0NjMxYzNiOGQ3ZmVlNmRlZDYwMTUzZjNC3SAe: 00:18:26.955 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzMwZjhiMGRlOWNmNTExN2VkMmQxYzg5MDcyNjllNmRhZDgyMDc1YjY5OTJkMjA5OWZkNGIyODQ0MTgzZDEyNn5WUho=: 00:18:26.955 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:26.955 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:27.212 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGIzNTQ4YjE0NjMxYzNiOGQ3ZmVlNmRlZDYwMTUzZjNC3SAe: 00:18:27.212 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzMwZjhiMGRlOWNmNTExN2VkMmQxYzg5MDcyNjllNmRhZDgyMDc1YjY5OTJkMjA5OWZkNGIyODQ0MTgzZDEyNn5WUho=: ]] 00:18:27.212 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzMwZjhiMGRlOWNmNTExN2VkMmQxYzg5MDcyNjllNmRhZDgyMDc1YjY5OTJkMjA5OWZkNGIyODQ0MTgzZDEyNn5WUho=: 00:18:27.213 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:18:27.213 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:27.213 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:27.213 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:27.213 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:27.213 09:55:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:27.213 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:27.213 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.213 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:27.213 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.213 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:27.213 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:27.213 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:27.213 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:27.213 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:27.213 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:27.213 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:27.213 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:27.213 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:27.213 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:27.213 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:27.213 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:27.213 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.213 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:27.470 nvme0n1 00:18:27.471 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.471 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:27.471 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:27.471 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.471 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:27.471 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.471 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:27.471 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:27.471 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.471 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:27.471 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.471 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:27.471 09:55:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:18:27.471 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:27.471 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:27.471 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:27.471 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:27.471 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2RjNWMyNjIzYzAzZTJlYWNhOGU0NzViMTRiMjU5MTRhMmFjMjI5OTFiZmE0OWU3nN04uQ==: 00:18:27.471 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWFjYzc5MDRlZmEyODU3ZjkzMWRiNjE2NGE1ZmJhM2QwODY0NDI4NWRkNjZkNzU11BHGyg==: 00:18:27.471 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:27.471 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:27.471 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2RjNWMyNjIzYzAzZTJlYWNhOGU0NzViMTRiMjU5MTRhMmFjMjI5OTFiZmE0OWU3nN04uQ==: 00:18:27.471 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWFjYzc5MDRlZmEyODU3ZjkzMWRiNjE2NGE1ZmJhM2QwODY0NDI4NWRkNjZkNzU11BHGyg==: ]] 00:18:27.471 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWFjYzc5MDRlZmEyODU3ZjkzMWRiNjE2NGE1ZmJhM2QwODY0NDI4NWRkNjZkNzU11BHGyg==: 00:18:27.471 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:18:27.471 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:27.471 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:27.471 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:27.471 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:27.471 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:27.471 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:27.471 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.471 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:27.471 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.471 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:27.471 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:27.471 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:27.471 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:27.471 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:27.471 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:27.471 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:27.471 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:27.471 09:55:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:27.471 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:27.471 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:27.471 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:27.471 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.471 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:27.471 nvme0n1 00:18:27.471 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.471 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:27.471 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:27.471 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.471 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:27.471 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.730 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:27.730 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:27.730 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.730 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:27.730 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.730 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:27.730 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:18:27.730 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:27.730 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:27.730 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:27.730 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:27.730 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWE5ZWQ5NjljMmE1MGY3OTI0OTU5YzRjMDA4ZDY4ZmM2EsA1: 00:18:27.730 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODg4ZTQzZTllMDJiYTliZGMzYmExZDBlMjAyZTRjNTTHOIm4: 00:18:27.730 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:27.730 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:27.730 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWE5ZWQ5NjljMmE1MGY3OTI0OTU5YzRjMDA4ZDY4ZmM2EsA1: 00:18:27.730 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODg4ZTQzZTllMDJiYTliZGMzYmExZDBlMjAyZTRjNTTHOIm4: ]] 00:18:27.730 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODg4ZTQzZTllMDJiYTliZGMzYmExZDBlMjAyZTRjNTTHOIm4: 00:18:27.730 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:18:27.730 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:27.730 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:27.730 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:27.730 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:27.730 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:27.730 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:27.730 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.730 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:27.730 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.730 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:27.730 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:27.730 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:27.730 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:27.730 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:27.730 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:27.730 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:27.730 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:27.730 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:27.730 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:27.730 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:27.730 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:27.730 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.730 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:27.730 nvme0n1 00:18:27.730 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.730 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:27.730 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:27.730 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.730 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:27.730 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.730 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:27.731 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:18:27.731 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.731 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:27.731 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.731 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:27.731 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:18:27.731 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:27.731 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:27.731 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:27.731 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:27.731 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTNkNzNhNWRjMWMyNzIxOGVjMGY1MzY0MGVlNWM1OWVjYjYwNWRjZjZlMWIzYmI1Km7pog==: 00:18:27.731 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmFiM2VjZGE1YTcyOWZkNjQ4NDliOTllODNjMjQyZGRK6y/N: 00:18:27.731 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:27.731 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:27.731 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTNkNzNhNWRjMWMyNzIxOGVjMGY1MzY0MGVlNWM1OWVjYjYwNWRjZjZlMWIzYmI1Km7pog==: 00:18:27.731 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmFiM2VjZGE1YTcyOWZkNjQ4NDliOTllODNjMjQyZGRK6y/N: ]] 00:18:27.731 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmFiM2VjZGE1YTcyOWZkNjQ4NDliOTllODNjMjQyZGRK6y/N: 00:18:27.731 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:18:27.731 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:27.731 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:27.731 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:27.731 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:27.731 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:27.731 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:27.731 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.731 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:27.731 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.731 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:27.731 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:27.731 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:27.731 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:27.731 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:27.731 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:27.731 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:27.731 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:27.731 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:27.731 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:27.731 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:27.731 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:27.731 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.731 09:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:27.990 nvme0n1 00:18:27.990 09:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.990 09:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:27.990 09:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:27.990 09:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.990 09:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:27.990 09:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.990 09:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:27.990 09:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:27.990 09:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.990 09:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:27.990 09:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.990 09:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:27.990 09:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:18:27.990 09:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:27.990 09:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:27.990 09:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:27.990 09:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:27.990 09:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2M4YzRiZGYxZGQxYmQ0MDI4MmY5YTVjMGI0OGI5MjFlOThjY2RjMjZmYWEzN2I3MTcyMjU2YmVkNWNmZmRhM5n1X5Y=: 00:18:27.990 09:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:27.990 09:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:27.990 09:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:27.990 09:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:Y2M4YzRiZGYxZGQxYmQ0MDI4MmY5YTVjMGI0OGI5MjFlOThjY2RjMjZmYWEzN2I3MTcyMjU2YmVkNWNmZmRhM5n1X5Y=: 00:18:27.990 09:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:27.990 09:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:18:27.990 09:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:27.990 09:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:27.990 09:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:27.990 09:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:27.990 09:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:27.990 09:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:27.990 09:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.990 09:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:27.990 09:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.990 09:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:27.990 09:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:27.990 09:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:27.990 09:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:27.990 09:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:27.990 09:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:27.990 09:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:27.990 09:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:27.990 09:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:27.990 09:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:27.990 09:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:27.990 09:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:27.990 09:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.990 09:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:28.249 nvme0n1 00:18:28.249 09:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.249 09:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:28.249 09:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:28.249 09:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.249 09:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:28.249 09:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.249 09:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:28.249 09:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:28.249 09:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.249 09:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:28.249 09:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.249 09:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:28.249 09:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:28.249 09:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:18:28.249 09:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:28.249 09:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:28.249 09:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:28.249 09:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:28.249 09:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGIzNTQ4YjE0NjMxYzNiOGQ3ZmVlNmRlZDYwMTUzZjNC3SAe: 00:18:28.249 09:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzMwZjhiMGRlOWNmNTExN2VkMmQxYzg5MDcyNjllNmRhZDgyMDc1YjY5OTJkMjA5OWZkNGIyODQ0MTgzZDEyNn5WUho=: 00:18:28.249 09:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:28.249 09:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:28.876 09:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGIzNTQ4YjE0NjMxYzNiOGQ3ZmVlNmRlZDYwMTUzZjNC3SAe: 00:18:28.876 09:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzMwZjhiMGRlOWNmNTExN2VkMmQxYzg5MDcyNjllNmRhZDgyMDc1YjY5OTJkMjA5OWZkNGIyODQ0MTgzZDEyNn5WUho=: ]] 00:18:28.876 09:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzMwZjhiMGRlOWNmNTExN2VkMmQxYzg5MDcyNjllNmRhZDgyMDc1YjY5OTJkMjA5OWZkNGIyODQ0MTgzZDEyNn5WUho=: 00:18:28.876 09:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:18:28.877 09:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:28.877 09:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:28.877 09:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:28.877 09:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:28.877 09:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:28.877 09:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:28.877 09:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.877 09:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:28.877 09:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.877 09:55:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:28.877 09:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:28.877 09:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:28.877 09:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:28.877 09:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:28.877 09:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:28.877 09:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:28.877 09:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:28.877 09:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:28.877 09:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:28.877 09:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:28.877 09:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:28.877 09:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.877 09:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:29.158 nvme0n1 00:18:29.158 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.158 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:29.158 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:29.158 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.158 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:29.158 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.158 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:29.158 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:29.158 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.158 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:29.158 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.158 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:29.158 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:18:29.158 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:29.158 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:29.158 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:29.158 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:29.158 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:Y2RjNWMyNjIzYzAzZTJlYWNhOGU0NzViMTRiMjU5MTRhMmFjMjI5OTFiZmE0OWU3nN04uQ==: 00:18:29.158 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWFjYzc5MDRlZmEyODU3ZjkzMWRiNjE2NGE1ZmJhM2QwODY0NDI4NWRkNjZkNzU11BHGyg==: 00:18:29.159 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:29.159 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:29.159 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2RjNWMyNjIzYzAzZTJlYWNhOGU0NzViMTRiMjU5MTRhMmFjMjI5OTFiZmE0OWU3nN04uQ==: 00:18:29.159 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWFjYzc5MDRlZmEyODU3ZjkzMWRiNjE2NGE1ZmJhM2QwODY0NDI4NWRkNjZkNzU11BHGyg==: ]] 00:18:29.159 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWFjYzc5MDRlZmEyODU3ZjkzMWRiNjE2NGE1ZmJhM2QwODY0NDI4NWRkNjZkNzU11BHGyg==: 00:18:29.159 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:18:29.159 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:29.159 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:29.159 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:29.159 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:29.159 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:29.159 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:29.159 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.159 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:29.159 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.159 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:29.159 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:29.159 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:29.159 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:29.159 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:29.159 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:29.159 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:29.159 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:29.159 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:29.159 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:29.159 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:29.159 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:29.159 09:55:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.159 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:29.159 nvme0n1 00:18:29.159 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.159 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:29.431 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.431 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:29.431 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:29.431 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.431 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:29.431 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:29.431 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.431 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:29.431 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.431 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:29.431 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:18:29.431 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:29.431 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:29.431 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:29.431 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:29.431 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWE5ZWQ5NjljMmE1MGY3OTI0OTU5YzRjMDA4ZDY4ZmM2EsA1: 00:18:29.431 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODg4ZTQzZTllMDJiYTliZGMzYmExZDBlMjAyZTRjNTTHOIm4: 00:18:29.431 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:29.431 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:29.431 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWE5ZWQ5NjljMmE1MGY3OTI0OTU5YzRjMDA4ZDY4ZmM2EsA1: 00:18:29.431 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODg4ZTQzZTllMDJiYTliZGMzYmExZDBlMjAyZTRjNTTHOIm4: ]] 00:18:29.431 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODg4ZTQzZTllMDJiYTliZGMzYmExZDBlMjAyZTRjNTTHOIm4: 00:18:29.431 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:18:29.431 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:29.431 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:29.431 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:29.431 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:29.431 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:29.431 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:29.431 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.431 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:29.431 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.431 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:29.431 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:29.431 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:29.431 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:29.431 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:29.431 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:29.431 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:29.431 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:29.432 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:29.432 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:29.432 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:29.432 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:29.432 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.432 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:29.432 nvme0n1 00:18:29.432 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.432 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:29.432 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.432 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:29.432 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:29.691 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.691 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:29.691 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:29.691 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.691 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:29.691 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.691 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:29.691 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 3 00:18:29.691 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:29.691 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:29.691 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:29.691 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:29.691 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTNkNzNhNWRjMWMyNzIxOGVjMGY1MzY0MGVlNWM1OWVjYjYwNWRjZjZlMWIzYmI1Km7pog==: 00:18:29.691 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmFiM2VjZGE1YTcyOWZkNjQ4NDliOTllODNjMjQyZGRK6y/N: 00:18:29.691 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:29.691 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:29.691 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTNkNzNhNWRjMWMyNzIxOGVjMGY1MzY0MGVlNWM1OWVjYjYwNWRjZjZlMWIzYmI1Km7pog==: 00:18:29.691 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmFiM2VjZGE1YTcyOWZkNjQ4NDliOTllODNjMjQyZGRK6y/N: ]] 00:18:29.691 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmFiM2VjZGE1YTcyOWZkNjQ4NDliOTllODNjMjQyZGRK6y/N: 00:18:29.691 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:18:29.691 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:29.691 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:29.691 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:29.691 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:29.691 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:29.691 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:29.691 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.691 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:29.691 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.691 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:29.691 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:29.691 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:29.691 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:29.691 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:29.691 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:29.691 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:29.691 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:29.691 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:29.691 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:29.691 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:29.691 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:29.691 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.691 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:29.691 nvme0n1 00:18:29.691 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.691 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:29.691 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.691 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:29.691 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:29.952 09:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.952 09:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:29.952 09:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:29.952 09:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.952 09:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:29.952 09:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.952 09:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:29.952 09:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:18:29.952 09:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:29.952 09:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:29.952 09:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:29.952 09:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:29.952 09:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2M4YzRiZGYxZGQxYmQ0MDI4MmY5YTVjMGI0OGI5MjFlOThjY2RjMjZmYWEzN2I3MTcyMjU2YmVkNWNmZmRhM5n1X5Y=: 00:18:29.952 09:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:29.952 09:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:29.952 09:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:29.952 09:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2M4YzRiZGYxZGQxYmQ0MDI4MmY5YTVjMGI0OGI5MjFlOThjY2RjMjZmYWEzN2I3MTcyMjU2YmVkNWNmZmRhM5n1X5Y=: 00:18:29.952 09:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:29.952 09:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:18:29.952 09:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:29.952 09:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:29.952 09:55:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:29.952 09:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:29.952 09:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:29.952 09:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:29.952 09:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.952 09:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:29.952 09:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.952 09:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:29.953 09:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:29.953 09:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:29.953 09:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:29.953 09:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:29.953 09:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:29.953 09:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:29.953 09:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:29.953 09:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:29.953 09:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:29.953 09:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:29.953 09:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:29.953 09:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.953 09:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:29.953 nvme0n1 00:18:29.953 09:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.953 09:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:29.953 09:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:29.953 09:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.953 09:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:29.953 09:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.211 09:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:30.211 09:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:30.211 09:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.211 09:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:30.211 09:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.211 09:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:30.211 09:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:30.211 09:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:18:30.211 09:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:30.211 09:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:30.211 09:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:30.211 09:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:30.211 09:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGIzNTQ4YjE0NjMxYzNiOGQ3ZmVlNmRlZDYwMTUzZjNC3SAe: 00:18:30.211 09:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzMwZjhiMGRlOWNmNTExN2VkMmQxYzg5MDcyNjllNmRhZDgyMDc1YjY5OTJkMjA5OWZkNGIyODQ0MTgzZDEyNn5WUho=: 00:18:30.211 09:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:30.211 09:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:31.587 09:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGIzNTQ4YjE0NjMxYzNiOGQ3ZmVlNmRlZDYwMTUzZjNC3SAe: 00:18:31.587 09:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzMwZjhiMGRlOWNmNTExN2VkMmQxYzg5MDcyNjllNmRhZDgyMDc1YjY5OTJkMjA5OWZkNGIyODQ0MTgzZDEyNn5WUho=: ]] 00:18:31.587 09:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzMwZjhiMGRlOWNmNTExN2VkMmQxYzg5MDcyNjllNmRhZDgyMDc1YjY5OTJkMjA5OWZkNGIyODQ0MTgzZDEyNn5WUho=: 00:18:31.587 09:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:18:31.587 09:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:31.587 09:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:31.587 09:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:31.587 09:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:31.587 09:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:31.587 09:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:31.587 09:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.587 09:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:31.587 09:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.587 09:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:31.587 09:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:31.587 09:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:31.587 09:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:31.587 09:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:31.587 09:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:31.587 09:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:31.587 09:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:31.587 09:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:31.587 09:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:31.587 09:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:31.587 09:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:31.587 09:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.587 09:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:31.846 nvme0n1 00:18:31.846 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.846 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:31.846 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.846 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:31.846 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:31.846 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.104 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:32.104 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:32.104 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.104 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:32.104 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.104 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:32.104 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:18:32.104 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:32.104 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:32.104 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:32.104 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:32.104 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2RjNWMyNjIzYzAzZTJlYWNhOGU0NzViMTRiMjU5MTRhMmFjMjI5OTFiZmE0OWU3nN04uQ==: 00:18:32.104 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWFjYzc5MDRlZmEyODU3ZjkzMWRiNjE2NGE1ZmJhM2QwODY0NDI4NWRkNjZkNzU11BHGyg==: 00:18:32.104 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:32.104 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:32.104 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:Y2RjNWMyNjIzYzAzZTJlYWNhOGU0NzViMTRiMjU5MTRhMmFjMjI5OTFiZmE0OWU3nN04uQ==: 00:18:32.104 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWFjYzc5MDRlZmEyODU3ZjkzMWRiNjE2NGE1ZmJhM2QwODY0NDI4NWRkNjZkNzU11BHGyg==: ]] 00:18:32.104 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWFjYzc5MDRlZmEyODU3ZjkzMWRiNjE2NGE1ZmJhM2QwODY0NDI4NWRkNjZkNzU11BHGyg==: 00:18:32.104 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:18:32.104 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:32.104 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:32.104 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:32.104 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:32.104 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:32.104 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:32.104 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.104 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:32.104 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.104 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:32.104 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:32.104 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:32.104 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:32.104 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:32.104 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:32.104 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:32.104 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:32.104 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:32.104 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:32.104 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:32.104 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:32.104 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.104 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:32.363 nvme0n1 00:18:32.363 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.363 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:32.363 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:32.363 09:55:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.363 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:32.363 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.363 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:32.363 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:32.363 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.363 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:32.363 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.363 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:32.363 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:18:32.363 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:32.363 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:32.363 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:32.363 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:32.363 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWE5ZWQ5NjljMmE1MGY3OTI0OTU5YzRjMDA4ZDY4ZmM2EsA1: 00:18:32.363 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODg4ZTQzZTllMDJiYTliZGMzYmExZDBlMjAyZTRjNTTHOIm4: 00:18:32.363 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:32.363 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:32.363 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWE5ZWQ5NjljMmE1MGY3OTI0OTU5YzRjMDA4ZDY4ZmM2EsA1: 00:18:32.363 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODg4ZTQzZTllMDJiYTliZGMzYmExZDBlMjAyZTRjNTTHOIm4: ]] 00:18:32.363 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODg4ZTQzZTllMDJiYTliZGMzYmExZDBlMjAyZTRjNTTHOIm4: 00:18:32.363 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:18:32.363 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:32.363 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:32.363 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:32.363 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:32.363 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:32.363 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:32.363 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.363 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:32.363 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.363 09:55:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:32.363 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:32.363 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:32.363 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:32.363 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:32.363 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:32.363 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:32.363 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:32.363 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:32.363 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:32.363 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:32.363 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:32.363 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.363 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:32.931 nvme0n1 00:18:32.931 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.931 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:32.931 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:32.931 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.931 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:32.931 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.931 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:32.931 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:32.931 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.931 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:32.931 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.931 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:32.931 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:18:32.931 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:32.931 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:32.931 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:32.931 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:32.931 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MTNkNzNhNWRjMWMyNzIxOGVjMGY1MzY0MGVlNWM1OWVjYjYwNWRjZjZlMWIzYmI1Km7pog==: 00:18:32.931 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmFiM2VjZGE1YTcyOWZkNjQ4NDliOTllODNjMjQyZGRK6y/N: 00:18:32.931 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:32.931 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:32.931 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTNkNzNhNWRjMWMyNzIxOGVjMGY1MzY0MGVlNWM1OWVjYjYwNWRjZjZlMWIzYmI1Km7pog==: 00:18:32.931 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmFiM2VjZGE1YTcyOWZkNjQ4NDliOTllODNjMjQyZGRK6y/N: ]] 00:18:32.931 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmFiM2VjZGE1YTcyOWZkNjQ4NDliOTllODNjMjQyZGRK6y/N: 00:18:32.931 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:18:32.931 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:32.931 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:32.931 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:32.931 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:32.931 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:32.931 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:32.931 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.931 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:32.931 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.931 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:32.931 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:32.931 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:32.931 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:32.931 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:32.931 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:32.931 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:32.931 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:32.931 09:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:32.931 09:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:32.931 09:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:32.931 09:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:32.931 09:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.931 
09:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:33.204 nvme0n1 00:18:33.204 09:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.204 09:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:33.204 09:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:33.204 09:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.204 09:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:33.204 09:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.204 09:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.204 09:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:33.204 09:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.204 09:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:33.204 09:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.204 09:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:33.204 09:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:18:33.204 09:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:33.204 09:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:33.204 09:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:33.204 09:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:33.204 09:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2M4YzRiZGYxZGQxYmQ0MDI4MmY5YTVjMGI0OGI5MjFlOThjY2RjMjZmYWEzN2I3MTcyMjU2YmVkNWNmZmRhM5n1X5Y=: 00:18:33.204 09:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:33.204 09:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:33.204 09:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:33.204 09:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2M4YzRiZGYxZGQxYmQ0MDI4MmY5YTVjMGI0OGI5MjFlOThjY2RjMjZmYWEzN2I3MTcyMjU2YmVkNWNmZmRhM5n1X5Y=: 00:18:33.204 09:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:33.204 09:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:18:33.204 09:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:33.204 09:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:33.204 09:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:33.204 09:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:33.204 09:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:33.204 09:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:33.204 09:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.204 09:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:33.204 09:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.204 09:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:33.204 09:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:33.204 09:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:33.204 09:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:33.204 09:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:33.204 09:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:33.204 09:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:33.204 09:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:33.204 09:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:33.204 09:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:33.204 09:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:33.204 09:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:33.204 09:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.204 09:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:33.462 nvme0n1 00:18:33.462 09:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.462 09:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:33.462 09:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.462 09:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:33.462 09:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:33.462 09:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.720 09:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.720 09:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:33.720 09:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.720 09:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:33.720 09:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.720 09:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:33.720 09:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:33.720 09:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:18:33.720 09:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:33.720 09:55:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:33.720 09:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:33.720 09:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:33.720 09:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGIzNTQ4YjE0NjMxYzNiOGQ3ZmVlNmRlZDYwMTUzZjNC3SAe: 00:18:33.720 09:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzMwZjhiMGRlOWNmNTExN2VkMmQxYzg5MDcyNjllNmRhZDgyMDc1YjY5OTJkMjA5OWZkNGIyODQ0MTgzZDEyNn5WUho=: 00:18:33.720 09:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:33.720 09:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:33.720 09:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGIzNTQ4YjE0NjMxYzNiOGQ3ZmVlNmRlZDYwMTUzZjNC3SAe: 00:18:33.720 09:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzMwZjhiMGRlOWNmNTExN2VkMmQxYzg5MDcyNjllNmRhZDgyMDc1YjY5OTJkMjA5OWZkNGIyODQ0MTgzZDEyNn5WUho=: ]] 00:18:33.720 09:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzMwZjhiMGRlOWNmNTExN2VkMmQxYzg5MDcyNjllNmRhZDgyMDc1YjY5OTJkMjA5OWZkNGIyODQ0MTgzZDEyNn5WUho=: 00:18:33.720 09:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:18:33.720 09:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:33.720 09:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:33.720 09:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:33.720 09:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:33.720 09:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:33.720 09:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:33.720 09:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.720 09:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:33.720 09:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.720 09:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:33.720 09:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:33.720 09:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:33.720 09:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:33.720 09:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:33.720 09:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:33.720 09:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:33.720 09:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:33.720 09:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:33.720 09:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:33.720 09:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:33.720 09:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:33.720 09:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.720 09:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:34.286 nvme0n1 00:18:34.286 09:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.286 09:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:34.286 09:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.286 09:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:34.286 09:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:34.286 09:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.286 09:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:34.286 09:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:34.286 09:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.286 09:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:34.286 09:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.286 09:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:34.286 09:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:18:34.286 09:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:34.286 09:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:34.286 09:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:34.286 09:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:34.286 09:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2RjNWMyNjIzYzAzZTJlYWNhOGU0NzViMTRiMjU5MTRhMmFjMjI5OTFiZmE0OWU3nN04uQ==: 00:18:34.286 09:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWFjYzc5MDRlZmEyODU3ZjkzMWRiNjE2NGE1ZmJhM2QwODY0NDI4NWRkNjZkNzU11BHGyg==: 00:18:34.286 09:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:34.286 09:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:34.286 09:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2RjNWMyNjIzYzAzZTJlYWNhOGU0NzViMTRiMjU5MTRhMmFjMjI5OTFiZmE0OWU3nN04uQ==: 00:18:34.286 09:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWFjYzc5MDRlZmEyODU3ZjkzMWRiNjE2NGE1ZmJhM2QwODY0NDI4NWRkNjZkNzU11BHGyg==: ]] 00:18:34.286 09:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWFjYzc5MDRlZmEyODU3ZjkzMWRiNjE2NGE1ZmJhM2QwODY0NDI4NWRkNjZkNzU11BHGyg==: 00:18:34.286 09:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:18:34.286 09:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:34.286 09:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:34.286 09:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:34.286 09:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:34.286 09:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:34.286 09:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:34.286 09:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.286 09:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:34.286 09:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.286 09:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:34.286 09:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:34.286 09:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:34.286 09:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:34.286 09:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:34.286 09:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:34.286 09:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:34.286 09:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:34.286 09:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:34.286 09:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:34.286 09:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:34.286 09:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:34.286 09:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.286 09:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:34.852 nvme0n1 00:18:34.852 09:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.852 09:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:34.852 09:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:34.852 09:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.852 09:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:34.852 09:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.852 09:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:34.852 09:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:34.852 09:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:18:34.852 09:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:34.852 09:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.852 09:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:34.852 09:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:18:34.852 09:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:34.852 09:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:34.852 09:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:34.852 09:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:34.852 09:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWE5ZWQ5NjljMmE1MGY3OTI0OTU5YzRjMDA4ZDY4ZmM2EsA1: 00:18:34.852 09:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODg4ZTQzZTllMDJiYTliZGMzYmExZDBlMjAyZTRjNTTHOIm4: 00:18:34.852 09:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:34.852 09:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:34.852 09:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWE5ZWQ5NjljMmE1MGY3OTI0OTU5YzRjMDA4ZDY4ZmM2EsA1: 00:18:34.852 09:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODg4ZTQzZTllMDJiYTliZGMzYmExZDBlMjAyZTRjNTTHOIm4: ]] 00:18:34.852 09:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODg4ZTQzZTllMDJiYTliZGMzYmExZDBlMjAyZTRjNTTHOIm4: 00:18:34.852 09:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:18:34.852 09:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:34.852 09:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:34.852 09:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:34.852 09:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:34.852 09:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:34.852 09:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:34.852 09:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.852 09:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:34.852 09:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.852 09:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:34.852 09:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:34.852 09:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:34.852 09:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:34.852 09:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:34.852 09:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:34.852 
09:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:34.852 09:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:34.852 09:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:34.852 09:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:34.852 09:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:34.852 09:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:34.852 09:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.852 09:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:35.419 nvme0n1 00:18:35.419 09:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.419 09:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:35.419 09:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.419 09:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:35.419 09:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:35.419 09:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.419 09:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:35.419 09:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:35.419 09:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.419 09:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:35.419 09:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.419 09:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:35.419 09:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:18:35.419 09:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:35.419 09:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:35.419 09:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:35.419 09:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:35.419 09:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTNkNzNhNWRjMWMyNzIxOGVjMGY1MzY0MGVlNWM1OWVjYjYwNWRjZjZlMWIzYmI1Km7pog==: 00:18:35.419 09:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmFiM2VjZGE1YTcyOWZkNjQ4NDliOTllODNjMjQyZGRK6y/N: 00:18:35.419 09:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:35.419 09:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:35.419 09:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTNkNzNhNWRjMWMyNzIxOGVjMGY1MzY0MGVlNWM1OWVjYjYwNWRjZjZlMWIzYmI1Km7pog==: 00:18:35.419 09:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:MmFiM2VjZGE1YTcyOWZkNjQ4NDliOTllODNjMjQyZGRK6y/N: ]] 00:18:35.419 09:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmFiM2VjZGE1YTcyOWZkNjQ4NDliOTllODNjMjQyZGRK6y/N: 00:18:35.420 09:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:18:35.420 09:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:35.420 09:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:35.420 09:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:35.420 09:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:35.420 09:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:35.420 09:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:35.420 09:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.420 09:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:35.420 09:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.420 09:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:35.420 09:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:35.420 09:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:35.420 09:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:35.420 09:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:35.420 09:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:35.420 09:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:35.420 09:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:35.420 09:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:35.420 09:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:35.420 09:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:35.420 09:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:35.420 09:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.420 09:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:36.352 nvme0n1 00:18:36.352 09:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.352 09:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:36.352 09:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:36.352 09:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.352 09:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:36.352 09:56:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.353 09:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:36.353 09:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:36.353 09:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.353 09:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:36.353 09:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.353 09:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:36.353 09:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:18:36.353 09:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:36.353 09:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:36.353 09:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:36.353 09:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:36.353 09:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2M4YzRiZGYxZGQxYmQ0MDI4MmY5YTVjMGI0OGI5MjFlOThjY2RjMjZmYWEzN2I3MTcyMjU2YmVkNWNmZmRhM5n1X5Y=: 00:18:36.353 09:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:36.353 09:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:36.353 09:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:36.353 09:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2M4YzRiZGYxZGQxYmQ0MDI4MmY5YTVjMGI0OGI5MjFlOThjY2RjMjZmYWEzN2I3MTcyMjU2YmVkNWNmZmRhM5n1X5Y=: 00:18:36.353 09:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:36.353 09:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:18:36.353 09:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:36.353 09:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:36.353 09:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:36.353 09:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:36.353 09:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:36.353 09:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:36.353 09:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.353 09:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:36.353 09:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.353 09:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:36.353 09:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:36.353 09:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:36.353 09:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:36.353 09:56:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:36.353 09:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:36.353 09:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:36.353 09:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:36.353 09:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:36.353 09:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:36.353 09:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:36.353 09:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:36.353 09:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.353 09:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:37.288 nvme0n1 00:18:37.288 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.288 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:37.288 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.288 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:37.288 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:37.288 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.288 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:37.288 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:37.288 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.288 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:37.288 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.288 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:18:37.288 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:37.288 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:37.288 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:18:37.288 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:37.288 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:37.288 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:37.288 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:37.288 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGIzNTQ4YjE0NjMxYzNiOGQ3ZmVlNmRlZDYwMTUzZjNC3SAe: 00:18:37.288 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NzMwZjhiMGRlOWNmNTExN2VkMmQxYzg5MDcyNjllNmRhZDgyMDc1YjY5OTJkMjA5OWZkNGIyODQ0MTgzZDEyNn5WUho=: 00:18:37.288 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:37.288 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:37.288 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGIzNTQ4YjE0NjMxYzNiOGQ3ZmVlNmRlZDYwMTUzZjNC3SAe: 00:18:37.288 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzMwZjhiMGRlOWNmNTExN2VkMmQxYzg5MDcyNjllNmRhZDgyMDc1YjY5OTJkMjA5OWZkNGIyODQ0MTgzZDEyNn5WUho=: ]] 00:18:37.288 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzMwZjhiMGRlOWNmNTExN2VkMmQxYzg5MDcyNjllNmRhZDgyMDc1YjY5OTJkMjA5OWZkNGIyODQ0MTgzZDEyNn5WUho=: 00:18:37.288 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:18:37.288 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:37.288 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:37.288 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:37.288 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:37.288 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:37.288 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:37.288 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.288 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:37.288 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.288 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:37.288 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:37.288 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:37.288 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:37.288 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:37.288 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:37.288 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:37.288 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:37.288 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:37.288 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:37.288 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:37.288 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:37.288 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.288 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:18:37.288 nvme0n1 00:18:37.288 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.288 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:37.288 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:37.288 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.288 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:37.288 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.288 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:37.288 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:37.288 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.288 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:37.288 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.288 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:37.288 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:18:37.288 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:37.288 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:37.288 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:37.288 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:37.288 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2RjNWMyNjIzYzAzZTJlYWNhOGU0NzViMTRiMjU5MTRhMmFjMjI5OTFiZmE0OWU3nN04uQ==: 00:18:37.288 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWFjYzc5MDRlZmEyODU3ZjkzMWRiNjE2NGE1ZmJhM2QwODY0NDI4NWRkNjZkNzU11BHGyg==: 00:18:37.288 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:37.288 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:37.288 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2RjNWMyNjIzYzAzZTJlYWNhOGU0NzViMTRiMjU5MTRhMmFjMjI5OTFiZmE0OWU3nN04uQ==: 00:18:37.288 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWFjYzc5MDRlZmEyODU3ZjkzMWRiNjE2NGE1ZmJhM2QwODY0NDI4NWRkNjZkNzU11BHGyg==: ]] 00:18:37.288 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWFjYzc5MDRlZmEyODU3ZjkzMWRiNjE2NGE1ZmJhM2QwODY0NDI4NWRkNjZkNzU11BHGyg==: 00:18:37.288 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:18:37.288 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:37.288 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:37.288 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:37.288 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:37.288 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:18:37.288 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:37.288 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.288 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:37.288 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.288 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:37.288 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:37.288 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:37.288 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:37.288 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:37.288 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:37.288 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:37.288 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:37.288 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:37.288 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:37.288 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:37.288 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:37.288 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.288 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:37.546 nvme0n1 00:18:37.546 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.546 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:37.546 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.546 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:37.546 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:37.546 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.546 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:37.546 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:37.546 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.546 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:37.546 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.546 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:37.546 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:18:37.546 
09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:37.546 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:37.546 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:37.546 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:37.546 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWE5ZWQ5NjljMmE1MGY3OTI0OTU5YzRjMDA4ZDY4ZmM2EsA1: 00:18:37.546 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODg4ZTQzZTllMDJiYTliZGMzYmExZDBlMjAyZTRjNTTHOIm4: 00:18:37.546 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:37.546 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:37.546 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWE5ZWQ5NjljMmE1MGY3OTI0OTU5YzRjMDA4ZDY4ZmM2EsA1: 00:18:37.546 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODg4ZTQzZTllMDJiYTliZGMzYmExZDBlMjAyZTRjNTTHOIm4: ]] 00:18:37.546 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODg4ZTQzZTllMDJiYTliZGMzYmExZDBlMjAyZTRjNTTHOIm4: 00:18:37.546 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:18:37.546 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:37.546 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:37.546 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:37.546 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:37.546 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:37.547 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:37.547 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.547 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:37.547 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.547 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:37.547 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:37.547 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:37.547 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:37.547 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:37.547 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:37.547 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:37.547 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:37.547 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:37.547 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:37.547 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:37.547 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:37.547 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.547 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:37.547 nvme0n1 00:18:37.547 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.547 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:37.547 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.547 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:37.547 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:37.547 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.805 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:37.805 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:37.805 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.805 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:37.805 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.805 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:37.805 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:18:37.805 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:37.805 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:37.805 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:37.805 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:37.805 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTNkNzNhNWRjMWMyNzIxOGVjMGY1MzY0MGVlNWM1OWVjYjYwNWRjZjZlMWIzYmI1Km7pog==: 00:18:37.805 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmFiM2VjZGE1YTcyOWZkNjQ4NDliOTllODNjMjQyZGRK6y/N: 00:18:37.805 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:37.805 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:37.805 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTNkNzNhNWRjMWMyNzIxOGVjMGY1MzY0MGVlNWM1OWVjYjYwNWRjZjZlMWIzYmI1Km7pog==: 00:18:37.805 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmFiM2VjZGE1YTcyOWZkNjQ4NDliOTllODNjMjQyZGRK6y/N: ]] 00:18:37.805 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmFiM2VjZGE1YTcyOWZkNjQ4NDliOTllODNjMjQyZGRK6y/N: 00:18:37.805 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:18:37.805 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:37.805 
09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:37.805 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:37.805 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:37.805 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:37.805 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:37.805 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.805 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:37.805 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.805 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:37.805 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:37.805 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:37.805 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:37.805 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:37.805 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:37.805 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:37.805 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:37.805 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:37.805 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:37.805 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:37.806 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:37.806 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.806 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:37.806 nvme0n1 00:18:37.806 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.806 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:37.806 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:37.806 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.806 09:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:37.806 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.806 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:37.806 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:37.806 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.806 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:18:37.806 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.806 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:37.806 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:18:37.806 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:37.806 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:37.806 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:37.806 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:37.806 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2M4YzRiZGYxZGQxYmQ0MDI4MmY5YTVjMGI0OGI5MjFlOThjY2RjMjZmYWEzN2I3MTcyMjU2YmVkNWNmZmRhM5n1X5Y=: 00:18:37.806 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:37.806 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:37.806 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:37.806 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2M4YzRiZGYxZGQxYmQ0MDI4MmY5YTVjMGI0OGI5MjFlOThjY2RjMjZmYWEzN2I3MTcyMjU2YmVkNWNmZmRhM5n1X5Y=: 00:18:37.806 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:37.806 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:18:37.806 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:37.806 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:37.806 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:37.806 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:37.806 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:37.806 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:37.806 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.806 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:37.806 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.806 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:37.806 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:37.806 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:37.806 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:37.806 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:37.806 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:37.806 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:37.806 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:37.806 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:37.806 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:37.806 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:37.806 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:37.806 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.806 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.065 nvme0n1 00:18:38.065 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.065 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:38.065 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.065 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:38.065 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.065 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.065 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:38.065 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:38.065 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.065 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.065 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.065 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:38.065 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:38.065 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:18:38.065 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:38.065 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:38.065 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:38.065 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:38.065 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGIzNTQ4YjE0NjMxYzNiOGQ3ZmVlNmRlZDYwMTUzZjNC3SAe: 00:18:38.065 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzMwZjhiMGRlOWNmNTExN2VkMmQxYzg5MDcyNjllNmRhZDgyMDc1YjY5OTJkMjA5OWZkNGIyODQ0MTgzZDEyNn5WUho=: 00:18:38.065 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:38.065 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:38.065 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGIzNTQ4YjE0NjMxYzNiOGQ3ZmVlNmRlZDYwMTUzZjNC3SAe: 00:18:38.065 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzMwZjhiMGRlOWNmNTExN2VkMmQxYzg5MDcyNjllNmRhZDgyMDc1YjY5OTJkMjA5OWZkNGIyODQ0MTgzZDEyNn5WUho=: ]] 00:18:38.065 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NzMwZjhiMGRlOWNmNTExN2VkMmQxYzg5MDcyNjllNmRhZDgyMDc1YjY5OTJkMjA5OWZkNGIyODQ0MTgzZDEyNn5WUho=: 00:18:38.065 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:18:38.065 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:38.065 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:38.065 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:38.065 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:38.065 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:38.065 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:38.065 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.065 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.065 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.065 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:38.065 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:38.065 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:38.065 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:38.065 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:38.065 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:38.065 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:38.065 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:38.065 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:38.065 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:38.065 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:38.065 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:38.065 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.065 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.325 nvme0n1 00:18:38.325 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.325 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:38.325 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:38.325 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.325 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.325 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.325 
09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:38.325 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:38.325 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.325 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.325 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.325 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:38.325 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:18:38.325 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:38.325 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:38.325 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:38.325 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:38.325 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2RjNWMyNjIzYzAzZTJlYWNhOGU0NzViMTRiMjU5MTRhMmFjMjI5OTFiZmE0OWU3nN04uQ==: 00:18:38.325 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWFjYzc5MDRlZmEyODU3ZjkzMWRiNjE2NGE1ZmJhM2QwODY0NDI4NWRkNjZkNzU11BHGyg==: 00:18:38.325 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:38.325 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:38.325 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2RjNWMyNjIzYzAzZTJlYWNhOGU0NzViMTRiMjU5MTRhMmFjMjI5OTFiZmE0OWU3nN04uQ==: 00:18:38.325 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWFjYzc5MDRlZmEyODU3ZjkzMWRiNjE2NGE1ZmJhM2QwODY0NDI4NWRkNjZkNzU11BHGyg==: ]] 00:18:38.325 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWFjYzc5MDRlZmEyODU3ZjkzMWRiNjE2NGE1ZmJhM2QwODY0NDI4NWRkNjZkNzU11BHGyg==: 00:18:38.325 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:18:38.325 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:38.325 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:38.325 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:38.325 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:38.325 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:38.325 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:38.325 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.325 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.325 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.325 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:38.325 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:38.325 09:56:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:38.325 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:38.325 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:38.325 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:38.325 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:38.325 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:38.325 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:38.325 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:38.325 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:38.326 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:38.326 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.326 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.326 nvme0n1 00:18:38.326 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.326 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:38.326 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.326 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:38.326 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.326 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.585 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:38.585 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:38.585 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.585 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.585 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.585 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:38.585 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:18:38.585 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:38.585 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:38.585 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:38.585 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:38.585 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWE5ZWQ5NjljMmE1MGY3OTI0OTU5YzRjMDA4ZDY4ZmM2EsA1: 00:18:38.585 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODg4ZTQzZTllMDJiYTliZGMzYmExZDBlMjAyZTRjNTTHOIm4: 00:18:38.585 09:56:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:38.585 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:38.585 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWE5ZWQ5NjljMmE1MGY3OTI0OTU5YzRjMDA4ZDY4ZmM2EsA1: 00:18:38.585 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODg4ZTQzZTllMDJiYTliZGMzYmExZDBlMjAyZTRjNTTHOIm4: ]] 00:18:38.585 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODg4ZTQzZTllMDJiYTliZGMzYmExZDBlMjAyZTRjNTTHOIm4: 00:18:38.585 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:18:38.585 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:38.585 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:38.585 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:38.585 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:38.585 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:38.585 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:38.585 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.585 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.585 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.585 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:38.585 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:38.585 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:38.585 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:38.585 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:38.585 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:38.585 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:38.585 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:38.585 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:38.585 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:38.585 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:38.585 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:38.585 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.585 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.585 nvme0n1 00:18:38.585 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.585 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:38.585 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:38.586 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.586 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.586 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.586 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:38.586 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:38.586 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.586 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.586 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.586 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:38.586 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:18:38.586 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:38.586 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:38.586 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:38.586 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:38.586 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTNkNzNhNWRjMWMyNzIxOGVjMGY1MzY0MGVlNWM1OWVjYjYwNWRjZjZlMWIzYmI1Km7pog==: 00:18:38.586 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmFiM2VjZGE1YTcyOWZkNjQ4NDliOTllODNjMjQyZGRK6y/N: 00:18:38.586 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:38.586 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:38.586 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTNkNzNhNWRjMWMyNzIxOGVjMGY1MzY0MGVlNWM1OWVjYjYwNWRjZjZlMWIzYmI1Km7pog==: 00:18:38.586 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmFiM2VjZGE1YTcyOWZkNjQ4NDliOTllODNjMjQyZGRK6y/N: ]] 00:18:38.586 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmFiM2VjZGE1YTcyOWZkNjQ4NDliOTllODNjMjQyZGRK6y/N: 00:18:38.586 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:18:38.586 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:38.586 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:38.586 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:38.586 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:38.586 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:38.586 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:38.586 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.586 09:56:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.845 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.845 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:38.845 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:38.845 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:38.845 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:38.845 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:38.845 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:38.845 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:38.845 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:38.845 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:38.845 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:38.845 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:38.845 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:38.845 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.845 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.845 nvme0n1 00:18:38.845 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.845 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:38.845 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.845 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:38.845 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.845 09:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.845 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:38.845 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:38.845 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.845 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.845 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.845 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:38.845 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:18:38.845 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:38.845 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:38.845 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:38.845 
09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:38.845 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2M4YzRiZGYxZGQxYmQ0MDI4MmY5YTVjMGI0OGI5MjFlOThjY2RjMjZmYWEzN2I3MTcyMjU2YmVkNWNmZmRhM5n1X5Y=: 00:18:38.845 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:38.845 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:38.845 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:38.845 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2M4YzRiZGYxZGQxYmQ0MDI4MmY5YTVjMGI0OGI5MjFlOThjY2RjMjZmYWEzN2I3MTcyMjU2YmVkNWNmZmRhM5n1X5Y=: 00:18:38.846 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:38.846 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:18:38.846 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:38.846 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:38.846 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:38.846 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:38.846 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:38.846 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:38.846 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.846 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.846 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.846 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:38.846 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:38.846 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:38.846 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:38.846 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:38.846 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:38.846 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:38.846 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:38.846 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:38.846 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:38.846 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:38.846 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:38.846 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.846 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
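[Annotation] The entries above complete one connect_authenticate pass of the auth test: nvmet_auth_set_key programs the target-side DH-HMAC-CHAP secret for the current keyid, the host is restricted to a single digest/dhgroup pair, the controller is attached with that key, then verified and detached before the next combination. A minimal sketch of the host-side RPC sequence this trace exercises, reusing the address, NQNs and key names seen in the log; the ./scripts/rpc.py invocation path is an assumption, and key0/ckey0 are presumed to be registered with the SPDK keyring beforehand:

    # limit the initiator to one digest and one DH group (here sha384 + ffdhe3072)
    ./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
    # attach with DH-HMAC-CHAP; --dhchap-ctrlr-key enables bidirectional authentication
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # confirm the controller came up, then detach before the next key/dhgroup iteration
    ./scripts/rpc.py bdev_nvme_get_controllers
    ./scripts/rpc.py bdev_nvme_detach_controller nvme0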
00:18:39.111 nvme0n1 00:18:39.111 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.111 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:39.111 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:39.111 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.111 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.111 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.111 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:39.111 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:39.111 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.111 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.111 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.111 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:39.111 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:39.111 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:18:39.111 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:39.111 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:39.111 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:39.111 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:39.111 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGIzNTQ4YjE0NjMxYzNiOGQ3ZmVlNmRlZDYwMTUzZjNC3SAe: 00:18:39.111 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzMwZjhiMGRlOWNmNTExN2VkMmQxYzg5MDcyNjllNmRhZDgyMDc1YjY5OTJkMjA5OWZkNGIyODQ0MTgzZDEyNn5WUho=: 00:18:39.111 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:39.111 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:39.111 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGIzNTQ4YjE0NjMxYzNiOGQ3ZmVlNmRlZDYwMTUzZjNC3SAe: 00:18:39.111 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzMwZjhiMGRlOWNmNTExN2VkMmQxYzg5MDcyNjllNmRhZDgyMDc1YjY5OTJkMjA5OWZkNGIyODQ0MTgzZDEyNn5WUho=: ]] 00:18:39.111 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzMwZjhiMGRlOWNmNTExN2VkMmQxYzg5MDcyNjllNmRhZDgyMDc1YjY5OTJkMjA5OWZkNGIyODQ0MTgzZDEyNn5WUho=: 00:18:39.111 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:18:39.111 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:39.111 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:39.111 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:39.111 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:39.111 09:56:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:39.111 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:39.111 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.111 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.111 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.111 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:39.111 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:39.111 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:39.111 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:39.111 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:39.111 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:39.111 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:39.111 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:39.111 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:39.111 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:39.111 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:39.111 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:39.111 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.111 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.370 nvme0n1 00:18:39.370 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.370 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:39.370 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.370 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.370 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:39.370 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.370 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:39.370 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:39.370 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.370 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.370 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.370 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:39.370 09:56:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:18:39.370 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:39.370 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:39.370 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:39.370 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:39.370 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2RjNWMyNjIzYzAzZTJlYWNhOGU0NzViMTRiMjU5MTRhMmFjMjI5OTFiZmE0OWU3nN04uQ==: 00:18:39.370 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWFjYzc5MDRlZmEyODU3ZjkzMWRiNjE2NGE1ZmJhM2QwODY0NDI4NWRkNjZkNzU11BHGyg==: 00:18:39.370 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:39.370 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:39.370 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2RjNWMyNjIzYzAzZTJlYWNhOGU0NzViMTRiMjU5MTRhMmFjMjI5OTFiZmE0OWU3nN04uQ==: 00:18:39.370 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWFjYzc5MDRlZmEyODU3ZjkzMWRiNjE2NGE1ZmJhM2QwODY0NDI4NWRkNjZkNzU11BHGyg==: ]] 00:18:39.370 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWFjYzc5MDRlZmEyODU3ZjkzMWRiNjE2NGE1ZmJhM2QwODY0NDI4NWRkNjZkNzU11BHGyg==: 00:18:39.370 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:18:39.370 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:39.370 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:39.370 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:39.370 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:39.370 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:39.370 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:39.370 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.370 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.370 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.370 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:39.370 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:39.370 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:39.370 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:39.370 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:39.370 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:39.370 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:39.370 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:39.370 09:56:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:39.371 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:39.371 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:39.371 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:39.371 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.371 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.630 nvme0n1 00:18:39.630 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.630 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:39.630 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.630 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.630 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:39.630 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.630 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:39.630 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:39.630 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.630 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.630 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.630 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:39.630 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:18:39.630 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:39.630 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:39.630 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:39.630 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:39.630 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWE5ZWQ5NjljMmE1MGY3OTI0OTU5YzRjMDA4ZDY4ZmM2EsA1: 00:18:39.630 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODg4ZTQzZTllMDJiYTliZGMzYmExZDBlMjAyZTRjNTTHOIm4: 00:18:39.630 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:39.630 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:39.630 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWE5ZWQ5NjljMmE1MGY3OTI0OTU5YzRjMDA4ZDY4ZmM2EsA1: 00:18:39.630 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODg4ZTQzZTllMDJiYTliZGMzYmExZDBlMjAyZTRjNTTHOIm4: ]] 00:18:39.630 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODg4ZTQzZTllMDJiYTliZGMzYmExZDBlMjAyZTRjNTTHOIm4: 00:18:39.630 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:18:39.630 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:39.630 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:39.630 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:39.630 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:39.630 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:39.630 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:39.630 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.630 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.630 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.630 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:39.630 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:39.630 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:39.630 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:39.630 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:39.630 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:39.630 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:39.630 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:39.630 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:39.630 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:39.630 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:39.630 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:39.630 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.630 09:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.890 nvme0n1 00:18:39.890 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.890 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:39.890 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:39.890 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.890 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.890 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.890 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:39.890 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:18:39.890 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.890 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.890 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.890 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:39.890 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:18:39.890 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:39.890 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:39.890 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:39.890 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:39.890 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTNkNzNhNWRjMWMyNzIxOGVjMGY1MzY0MGVlNWM1OWVjYjYwNWRjZjZlMWIzYmI1Km7pog==: 00:18:39.890 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmFiM2VjZGE1YTcyOWZkNjQ4NDliOTllODNjMjQyZGRK6y/N: 00:18:39.890 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:39.890 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:39.890 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTNkNzNhNWRjMWMyNzIxOGVjMGY1MzY0MGVlNWM1OWVjYjYwNWRjZjZlMWIzYmI1Km7pog==: 00:18:39.890 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmFiM2VjZGE1YTcyOWZkNjQ4NDliOTllODNjMjQyZGRK6y/N: ]] 00:18:39.890 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmFiM2VjZGE1YTcyOWZkNjQ4NDliOTllODNjMjQyZGRK6y/N: 00:18:39.890 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:18:39.890 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:39.890 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:39.890 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:39.890 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:39.890 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:39.890 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:39.890 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.890 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.890 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.890 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:39.890 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:39.890 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:39.890 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:39.890 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:39.890 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:39.890 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:39.890 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:39.890 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:39.890 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:39.890 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:39.890 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:39.890 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.890 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:40.149 nvme0n1 00:18:40.149 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.149 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:40.149 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.149 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:40.149 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:40.149 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.149 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:40.149 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:40.149 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.149 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:40.149 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.149 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:40.149 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:18:40.149 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:40.149 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:40.149 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:40.149 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:40.149 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2M4YzRiZGYxZGQxYmQ0MDI4MmY5YTVjMGI0OGI5MjFlOThjY2RjMjZmYWEzN2I3MTcyMjU2YmVkNWNmZmRhM5n1X5Y=: 00:18:40.149 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:40.149 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:40.149 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:40.149 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:Y2M4YzRiZGYxZGQxYmQ0MDI4MmY5YTVjMGI0OGI5MjFlOThjY2RjMjZmYWEzN2I3MTcyMjU2YmVkNWNmZmRhM5n1X5Y=: 00:18:40.149 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:40.150 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:18:40.150 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:40.150 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:40.150 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:40.150 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:40.150 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:40.150 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:40.150 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.150 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:40.150 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.150 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:40.150 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:40.150 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:40.150 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:40.150 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:40.150 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:40.150 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:40.150 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:40.150 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:40.150 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:40.150 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:40.150 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:40.150 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.150 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:40.409 nvme0n1 00:18:40.409 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.409 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:40.409 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:40.409 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.409 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:40.409 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.409 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:40.409 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:40.409 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.409 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:40.409 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.409 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:40.409 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:40.409 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:18:40.409 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:40.409 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:40.409 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:40.409 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:40.409 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGIzNTQ4YjE0NjMxYzNiOGQ3ZmVlNmRlZDYwMTUzZjNC3SAe: 00:18:40.409 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzMwZjhiMGRlOWNmNTExN2VkMmQxYzg5MDcyNjllNmRhZDgyMDc1YjY5OTJkMjA5OWZkNGIyODQ0MTgzZDEyNn5WUho=: 00:18:40.409 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:40.409 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:40.409 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGIzNTQ4YjE0NjMxYzNiOGQ3ZmVlNmRlZDYwMTUzZjNC3SAe: 00:18:40.409 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzMwZjhiMGRlOWNmNTExN2VkMmQxYzg5MDcyNjllNmRhZDgyMDc1YjY5OTJkMjA5OWZkNGIyODQ0MTgzZDEyNn5WUho=: ]] 00:18:40.409 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzMwZjhiMGRlOWNmNTExN2VkMmQxYzg5MDcyNjllNmRhZDgyMDc1YjY5OTJkMjA5OWZkNGIyODQ0MTgzZDEyNn5WUho=: 00:18:40.409 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:18:40.409 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:40.409 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:40.409 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:40.409 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:40.409 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:40.409 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:40.409 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.409 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:40.409 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.409 09:56:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:40.409 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:40.409 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:40.409 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:40.409 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:40.409 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:40.409 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:40.409 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:40.409 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:40.409 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:40.409 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:40.409 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:40.409 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.409 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:40.976 nvme0n1 00:18:40.976 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.976 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:40.976 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.976 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:40.976 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:40.976 09:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.976 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:40.976 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:40.976 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.976 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:40.976 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.976 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:40.976 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:18:40.976 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:40.976 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:40.976 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:40.976 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:40.976 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:Y2RjNWMyNjIzYzAzZTJlYWNhOGU0NzViMTRiMjU5MTRhMmFjMjI5OTFiZmE0OWU3nN04uQ==: 00:18:40.976 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWFjYzc5MDRlZmEyODU3ZjkzMWRiNjE2NGE1ZmJhM2QwODY0NDI4NWRkNjZkNzU11BHGyg==: 00:18:40.976 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:40.976 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:40.976 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2RjNWMyNjIzYzAzZTJlYWNhOGU0NzViMTRiMjU5MTRhMmFjMjI5OTFiZmE0OWU3nN04uQ==: 00:18:40.976 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWFjYzc5MDRlZmEyODU3ZjkzMWRiNjE2NGE1ZmJhM2QwODY0NDI4NWRkNjZkNzU11BHGyg==: ]] 00:18:40.976 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWFjYzc5MDRlZmEyODU3ZjkzMWRiNjE2NGE1ZmJhM2QwODY0NDI4NWRkNjZkNzU11BHGyg==: 00:18:40.976 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:18:40.976 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:40.976 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:40.976 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:40.976 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:40.976 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:40.976 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:40.976 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.976 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:40.976 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.976 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:40.976 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:40.976 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:40.976 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:40.976 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:40.976 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:40.976 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:40.976 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:40.976 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:40.976 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:40.976 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:40.976 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:40.976 09:56:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.976 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:41.235 nvme0n1 00:18:41.235 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.235 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:41.235 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:41.235 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.235 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:41.235 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.235 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:41.235 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:41.235 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.235 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:41.235 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.235 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:41.235 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:18:41.235 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:41.235 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:41.235 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:41.235 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:41.235 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWE5ZWQ5NjljMmE1MGY3OTI0OTU5YzRjMDA4ZDY4ZmM2EsA1: 00:18:41.235 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODg4ZTQzZTllMDJiYTliZGMzYmExZDBlMjAyZTRjNTTHOIm4: 00:18:41.235 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:41.235 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:41.235 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWE5ZWQ5NjljMmE1MGY3OTI0OTU5YzRjMDA4ZDY4ZmM2EsA1: 00:18:41.235 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODg4ZTQzZTllMDJiYTliZGMzYmExZDBlMjAyZTRjNTTHOIm4: ]] 00:18:41.235 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODg4ZTQzZTllMDJiYTliZGMzYmExZDBlMjAyZTRjNTTHOIm4: 00:18:41.235 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:18:41.235 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:41.235 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:41.235 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:41.235 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:41.235 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:41.235 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:41.235 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.235 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:41.235 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.235 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:41.235 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:41.235 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:41.235 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:41.235 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:41.235 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:41.235 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:41.235 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:41.235 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:41.235 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:41.235 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:41.235 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:41.235 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.235 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:41.494 nvme0n1 00:18:41.494 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.753 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:41.753 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:41.753 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.753 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:41.753 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.753 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:41.753 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:41.753 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.753 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:41.753 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.753 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:41.753 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe6144 3 00:18:41.754 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:41.754 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:41.754 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:41.754 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:41.754 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTNkNzNhNWRjMWMyNzIxOGVjMGY1MzY0MGVlNWM1OWVjYjYwNWRjZjZlMWIzYmI1Km7pog==: 00:18:41.754 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmFiM2VjZGE1YTcyOWZkNjQ4NDliOTllODNjMjQyZGRK6y/N: 00:18:41.754 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:41.754 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:41.754 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTNkNzNhNWRjMWMyNzIxOGVjMGY1MzY0MGVlNWM1OWVjYjYwNWRjZjZlMWIzYmI1Km7pog==: 00:18:41.754 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmFiM2VjZGE1YTcyOWZkNjQ4NDliOTllODNjMjQyZGRK6y/N: ]] 00:18:41.754 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmFiM2VjZGE1YTcyOWZkNjQ4NDliOTllODNjMjQyZGRK6y/N: 00:18:41.754 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:18:41.754 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:41.754 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:41.754 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:41.754 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:41.754 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:41.754 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:41.754 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.754 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:41.754 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.754 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:41.754 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:41.754 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:41.754 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:41.754 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:41.754 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:41.754 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:41.754 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:41.754 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:41.754 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:41.754 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:41.754 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:41.754 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.754 09:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:42.013 nvme0n1 00:18:42.013 09:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.013 09:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:42.013 09:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.013 09:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:42.013 09:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:42.013 09:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.013 09:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:42.013 09:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:42.013 09:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.013 09:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:42.013 09:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.013 09:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:42.013 09:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:18:42.013 09:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:42.013 09:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:42.013 09:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:42.013 09:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:42.013 09:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2M4YzRiZGYxZGQxYmQ0MDI4MmY5YTVjMGI0OGI5MjFlOThjY2RjMjZmYWEzN2I3MTcyMjU2YmVkNWNmZmRhM5n1X5Y=: 00:18:42.013 09:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:42.013 09:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:42.013 09:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:42.013 09:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2M4YzRiZGYxZGQxYmQ0MDI4MmY5YTVjMGI0OGI5MjFlOThjY2RjMjZmYWEzN2I3MTcyMjU2YmVkNWNmZmRhM5n1X5Y=: 00:18:42.013 09:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:42.013 09:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:18:42.013 09:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:42.013 09:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:42.013 09:56:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:42.013 09:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:42.013 09:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:42.013 09:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:42.013 09:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.013 09:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:42.013 09:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.013 09:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:42.014 09:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:42.014 09:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:42.014 09:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:42.014 09:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:42.014 09:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:42.014 09:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:42.014 09:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:42.014 09:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:42.014 09:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:42.014 09:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:42.014 09:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:42.014 09:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.014 09:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:42.583 nvme0n1 00:18:42.583 09:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.583 09:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:42.583 09:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.583 09:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:42.583 09:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:42.583 09:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.583 09:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:42.583 09:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:42.583 09:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.583 09:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:42.583 09:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.583 09:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:42.583 09:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:42.583 09:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:18:42.583 09:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:42.583 09:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:42.583 09:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:42.583 09:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:42.583 09:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGIzNTQ4YjE0NjMxYzNiOGQ3ZmVlNmRlZDYwMTUzZjNC3SAe: 00:18:42.583 09:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzMwZjhiMGRlOWNmNTExN2VkMmQxYzg5MDcyNjllNmRhZDgyMDc1YjY5OTJkMjA5OWZkNGIyODQ0MTgzZDEyNn5WUho=: 00:18:42.583 09:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:42.583 09:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:42.583 09:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGIzNTQ4YjE0NjMxYzNiOGQ3ZmVlNmRlZDYwMTUzZjNC3SAe: 00:18:42.583 09:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzMwZjhiMGRlOWNmNTExN2VkMmQxYzg5MDcyNjllNmRhZDgyMDc1YjY5OTJkMjA5OWZkNGIyODQ0MTgzZDEyNn5WUho=: ]] 00:18:42.583 09:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzMwZjhiMGRlOWNmNTExN2VkMmQxYzg5MDcyNjllNmRhZDgyMDc1YjY5OTJkMjA5OWZkNGIyODQ0MTgzZDEyNn5WUho=: 00:18:42.583 09:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:18:42.583 09:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:42.583 09:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:42.583 09:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:42.583 09:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:42.583 09:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:42.583 09:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:42.583 09:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.583 09:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:42.583 09:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.583 09:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:42.583 09:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:42.583 09:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:42.583 09:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:42.583 09:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:42.583 09:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:42.583 09:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:42.583 09:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:42.583 09:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:42.583 09:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:42.583 09:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:42.583 09:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:42.583 09:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.583 09:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:43.150 nvme0n1 00:18:43.150 09:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.150 09:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:43.150 09:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:43.150 09:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.150 09:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:43.150 09:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.150 09:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:43.150 09:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:43.150 09:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.150 09:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:43.150 09:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.150 09:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:43.150 09:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:18:43.150 09:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:43.150 09:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:43.150 09:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:43.150 09:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:43.150 09:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2RjNWMyNjIzYzAzZTJlYWNhOGU0NzViMTRiMjU5MTRhMmFjMjI5OTFiZmE0OWU3nN04uQ==: 00:18:43.150 09:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWFjYzc5MDRlZmEyODU3ZjkzMWRiNjE2NGE1ZmJhM2QwODY0NDI4NWRkNjZkNzU11BHGyg==: 00:18:43.150 09:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:43.150 09:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:43.150 09:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:Y2RjNWMyNjIzYzAzZTJlYWNhOGU0NzViMTRiMjU5MTRhMmFjMjI5OTFiZmE0OWU3nN04uQ==: 00:18:43.150 09:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWFjYzc5MDRlZmEyODU3ZjkzMWRiNjE2NGE1ZmJhM2QwODY0NDI4NWRkNjZkNzU11BHGyg==: ]] 00:18:43.150 09:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWFjYzc5MDRlZmEyODU3ZjkzMWRiNjE2NGE1ZmJhM2QwODY0NDI4NWRkNjZkNzU11BHGyg==: 00:18:43.150 09:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:18:43.150 09:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:43.150 09:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:43.150 09:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:43.150 09:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:43.150 09:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:43.150 09:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:43.150 09:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.150 09:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:43.150 09:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.150 09:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:43.150 09:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:43.150 09:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:43.150 09:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:43.150 09:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:43.150 09:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:43.150 09:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:43.150 09:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:43.150 09:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:43.150 09:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:43.150 09:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:43.150 09:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:43.150 09:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.150 09:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:43.718 nvme0n1 00:18:43.718 09:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.718 09:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:43.718 09:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:43.718 09:56:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.718 09:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:43.718 09:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.718 09:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:43.718 09:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:43.718 09:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.718 09:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:43.718 09:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.718 09:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:43.718 09:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:18:43.718 09:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:43.718 09:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:43.718 09:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:43.718 09:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:43.718 09:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWE5ZWQ5NjljMmE1MGY3OTI0OTU5YzRjMDA4ZDY4ZmM2EsA1: 00:18:43.718 09:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODg4ZTQzZTllMDJiYTliZGMzYmExZDBlMjAyZTRjNTTHOIm4: 00:18:43.718 09:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:43.718 09:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:43.718 09:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWE5ZWQ5NjljMmE1MGY3OTI0OTU5YzRjMDA4ZDY4ZmM2EsA1: 00:18:43.718 09:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODg4ZTQzZTllMDJiYTliZGMzYmExZDBlMjAyZTRjNTTHOIm4: ]] 00:18:43.718 09:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODg4ZTQzZTllMDJiYTliZGMzYmExZDBlMjAyZTRjNTTHOIm4: 00:18:43.718 09:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:18:43.718 09:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:43.718 09:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:43.718 09:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:43.718 09:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:43.718 09:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:43.718 09:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:43.718 09:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.718 09:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:43.718 09:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.718 09:56:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:43.718 09:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:43.718 09:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:43.718 09:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:43.718 09:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:43.718 09:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:43.718 09:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:43.718 09:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:43.718 09:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:43.718 09:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:43.718 09:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:43.718 09:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:43.718 09:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.718 09:56:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:44.287 nvme0n1 00:18:44.287 09:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.287 09:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:44.287 09:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:44.287 09:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.287 09:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:44.287 09:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.287 09:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:44.287 09:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:44.287 09:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.287 09:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:44.287 09:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.287 09:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:44.287 09:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:18:44.287 09:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:44.287 09:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:44.287 09:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:44.287 09:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:44.287 09:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MTNkNzNhNWRjMWMyNzIxOGVjMGY1MzY0MGVlNWM1OWVjYjYwNWRjZjZlMWIzYmI1Km7pog==: 00:18:44.287 09:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmFiM2VjZGE1YTcyOWZkNjQ4NDliOTllODNjMjQyZGRK6y/N: 00:18:44.287 09:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:44.287 09:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:44.287 09:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTNkNzNhNWRjMWMyNzIxOGVjMGY1MzY0MGVlNWM1OWVjYjYwNWRjZjZlMWIzYmI1Km7pog==: 00:18:44.287 09:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmFiM2VjZGE1YTcyOWZkNjQ4NDliOTllODNjMjQyZGRK6y/N: ]] 00:18:44.287 09:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmFiM2VjZGE1YTcyOWZkNjQ4NDliOTllODNjMjQyZGRK6y/N: 00:18:44.287 09:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:18:44.287 09:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:44.287 09:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:44.287 09:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:44.287 09:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:44.287 09:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:44.287 09:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:44.287 09:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.287 09:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:44.287 09:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.287 09:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:44.287 09:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:44.287 09:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:44.287 09:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:44.287 09:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:44.287 09:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:44.287 09:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:44.287 09:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:44.287 09:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:44.287 09:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:44.287 09:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:44.287 09:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:44.287 09:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.287 
09:56:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:44.855 nvme0n1 00:18:44.855 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.855 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:44.855 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:44.855 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.855 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:44.855 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.855 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:44.855 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:44.855 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.855 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:44.855 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.855 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:44.855 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:18:44.855 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:44.855 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:44.855 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:44.855 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:44.855 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2M4YzRiZGYxZGQxYmQ0MDI4MmY5YTVjMGI0OGI5MjFlOThjY2RjMjZmYWEzN2I3MTcyMjU2YmVkNWNmZmRhM5n1X5Y=: 00:18:44.855 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:44.855 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:44.855 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:44.855 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2M4YzRiZGYxZGQxYmQ0MDI4MmY5YTVjMGI0OGI5MjFlOThjY2RjMjZmYWEzN2I3MTcyMjU2YmVkNWNmZmRhM5n1X5Y=: 00:18:44.855 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:44.855 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:18:44.855 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:44.856 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:44.856 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:44.856 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:44.856 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:44.856 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:44.856 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.856 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:44.856 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.856 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:44.856 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:44.856 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:44.856 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:44.856 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:44.856 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:44.856 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:44.856 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:44.856 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:44.856 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:44.856 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:44.856 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:44.856 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.856 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:45.423 nvme0n1 00:18:45.423 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.423 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:45.423 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:45.423 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.423 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:45.423 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.682 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:45.682 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:45.682 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.682 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:45.682 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.682 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:18:45.682 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:45.682 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:45.682 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:18:45.682 09:56:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:45.682 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:45.682 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:45.682 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:45.682 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGIzNTQ4YjE0NjMxYzNiOGQ3ZmVlNmRlZDYwMTUzZjNC3SAe: 00:18:45.682 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzMwZjhiMGRlOWNmNTExN2VkMmQxYzg5MDcyNjllNmRhZDgyMDc1YjY5OTJkMjA5OWZkNGIyODQ0MTgzZDEyNn5WUho=: 00:18:45.682 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:45.682 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:45.682 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGIzNTQ4YjE0NjMxYzNiOGQ3ZmVlNmRlZDYwMTUzZjNC3SAe: 00:18:45.682 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzMwZjhiMGRlOWNmNTExN2VkMmQxYzg5MDcyNjllNmRhZDgyMDc1YjY5OTJkMjA5OWZkNGIyODQ0MTgzZDEyNn5WUho=: ]] 00:18:45.682 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzMwZjhiMGRlOWNmNTExN2VkMmQxYzg5MDcyNjllNmRhZDgyMDc1YjY5OTJkMjA5OWZkNGIyODQ0MTgzZDEyNn5WUho=: 00:18:45.682 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:18:45.682 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:45.682 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:45.682 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:45.682 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:45.682 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:45.682 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:45.682 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.682 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:45.682 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.682 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:45.682 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:45.682 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:45.682 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:45.682 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:45.682 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:45.682 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:45.682 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:45.682 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:45.682 09:56:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:45.682 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:45.682 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:45.682 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.682 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:45.682 nvme0n1 00:18:45.682 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.682 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:45.682 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:45.682 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.682 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:45.682 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.682 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:45.682 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:45.682 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.682 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:45.682 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.683 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:45.683 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:18:45.683 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:45.683 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:45.683 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:45.683 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:45.683 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2RjNWMyNjIzYzAzZTJlYWNhOGU0NzViMTRiMjU5MTRhMmFjMjI5OTFiZmE0OWU3nN04uQ==: 00:18:45.683 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWFjYzc5MDRlZmEyODU3ZjkzMWRiNjE2NGE1ZmJhM2QwODY0NDI4NWRkNjZkNzU11BHGyg==: 00:18:45.683 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:45.683 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:45.683 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2RjNWMyNjIzYzAzZTJlYWNhOGU0NzViMTRiMjU5MTRhMmFjMjI5OTFiZmE0OWU3nN04uQ==: 00:18:45.683 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWFjYzc5MDRlZmEyODU3ZjkzMWRiNjE2NGE1ZmJhM2QwODY0NDI4NWRkNjZkNzU11BHGyg==: ]] 00:18:45.683 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWFjYzc5MDRlZmEyODU3ZjkzMWRiNjE2NGE1ZmJhM2QwODY0NDI4NWRkNjZkNzU11BHGyg==: 00:18:45.683 09:56:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:18:45.683 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:45.683 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:45.683 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:45.683 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:45.683 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:45.683 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:45.683 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.683 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:45.683 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.683 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:45.683 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:45.683 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:45.683 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:45.683 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:45.683 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:45.683 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:45.683 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:45.683 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:45.683 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:45.683 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:45.683 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:45.683 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.683 09:56:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:45.942 nvme0n1 00:18:45.942 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.942 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:45.942 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.942 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:45.942 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:45.942 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.942 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:45.942 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:45.942 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.942 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:45.942 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.942 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:45.942 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:18:45.942 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:45.942 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:45.942 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:45.942 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:45.942 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWE5ZWQ5NjljMmE1MGY3OTI0OTU5YzRjMDA4ZDY4ZmM2EsA1: 00:18:45.942 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODg4ZTQzZTllMDJiYTliZGMzYmExZDBlMjAyZTRjNTTHOIm4: 00:18:45.942 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:45.942 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:45.942 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWE5ZWQ5NjljMmE1MGY3OTI0OTU5YzRjMDA4ZDY4ZmM2EsA1: 00:18:45.942 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODg4ZTQzZTllMDJiYTliZGMzYmExZDBlMjAyZTRjNTTHOIm4: ]] 00:18:45.942 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODg4ZTQzZTllMDJiYTliZGMzYmExZDBlMjAyZTRjNTTHOIm4: 00:18:45.942 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:18:45.942 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:45.942 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:45.943 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:45.943 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:45.943 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:45.943 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:45.943 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.943 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:45.943 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.943 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:45.943 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:45.943 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:45.943 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:45.943 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:45.943 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:45.943 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:45.943 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:45.943 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:45.943 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:45.943 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:45.943 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:45.943 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.943 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:45.943 nvme0n1 00:18:45.943 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.943 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:45.943 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:45.943 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.943 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:46.202 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.202 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:46.202 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:46.202 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.202 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:46.202 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.202 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:46.202 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:18:46.202 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:46.202 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:46.202 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:46.203 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:46.203 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTNkNzNhNWRjMWMyNzIxOGVjMGY1MzY0MGVlNWM1OWVjYjYwNWRjZjZlMWIzYmI1Km7pog==: 00:18:46.203 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmFiM2VjZGE1YTcyOWZkNjQ4NDliOTllODNjMjQyZGRK6y/N: 00:18:46.203 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:46.203 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:46.203 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:02:MTNkNzNhNWRjMWMyNzIxOGVjMGY1MzY0MGVlNWM1OWVjYjYwNWRjZjZlMWIzYmI1Km7pog==: 00:18:46.203 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmFiM2VjZGE1YTcyOWZkNjQ4NDliOTllODNjMjQyZGRK6y/N: ]] 00:18:46.203 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmFiM2VjZGE1YTcyOWZkNjQ4NDliOTllODNjMjQyZGRK6y/N: 00:18:46.203 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:18:46.203 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:46.203 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:46.203 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:46.203 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:46.203 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:46.203 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:46.203 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.203 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:46.203 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.203 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:46.203 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:46.203 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:46.203 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:46.203 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:46.203 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:46.203 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:46.203 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:46.203 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:46.203 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:46.203 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:46.203 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:46.203 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.203 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:46.203 nvme0n1 00:18:46.203 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.203 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:46.203 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:46.203 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.203 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:46.203 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.203 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:46.203 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:46.203 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.203 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:46.203 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.203 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:46.203 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:18:46.203 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:46.203 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:46.203 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:46.203 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:46.203 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2M4YzRiZGYxZGQxYmQ0MDI4MmY5YTVjMGI0OGI5MjFlOThjY2RjMjZmYWEzN2I3MTcyMjU2YmVkNWNmZmRhM5n1X5Y=: 00:18:46.203 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:46.203 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:46.203 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:46.203 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2M4YzRiZGYxZGQxYmQ0MDI4MmY5YTVjMGI0OGI5MjFlOThjY2RjMjZmYWEzN2I3MTcyMjU2YmVkNWNmZmRhM5n1X5Y=: 00:18:46.203 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:46.203 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:18:46.203 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:46.203 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:46.203 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:46.203 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:46.203 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:46.203 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:46.203 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.203 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:46.203 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.203 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:46.203 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:46.203 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:18:46.203 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:46.203 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:46.203 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:46.203 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:46.203 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:46.203 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:46.203 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:46.203 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:46.203 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:46.203 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.203 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:46.463 nvme0n1 00:18:46.464 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.464 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:46.464 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:46.464 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.464 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:46.464 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.464 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:46.464 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:46.464 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.464 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:46.464 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.464 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:46.464 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:46.464 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:18:46.464 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:46.464 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:46.464 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:46.464 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:46.464 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGIzNTQ4YjE0NjMxYzNiOGQ3ZmVlNmRlZDYwMTUzZjNC3SAe: 00:18:46.464 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NzMwZjhiMGRlOWNmNTExN2VkMmQxYzg5MDcyNjllNmRhZDgyMDc1YjY5OTJkMjA5OWZkNGIyODQ0MTgzZDEyNn5WUho=: 00:18:46.464 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:46.464 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:46.464 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGIzNTQ4YjE0NjMxYzNiOGQ3ZmVlNmRlZDYwMTUzZjNC3SAe: 00:18:46.464 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzMwZjhiMGRlOWNmNTExN2VkMmQxYzg5MDcyNjllNmRhZDgyMDc1YjY5OTJkMjA5OWZkNGIyODQ0MTgzZDEyNn5WUho=: ]] 00:18:46.464 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzMwZjhiMGRlOWNmNTExN2VkMmQxYzg5MDcyNjllNmRhZDgyMDc1YjY5OTJkMjA5OWZkNGIyODQ0MTgzZDEyNn5WUho=: 00:18:46.464 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:18:46.464 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:46.464 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:46.464 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:46.464 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:46.464 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:46.464 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:46.464 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.464 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:46.464 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.464 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:46.464 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:46.464 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:46.464 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:46.464 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:46.464 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:46.464 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:46.464 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:46.464 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:46.464 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:46.464 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:46.464 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:46.464 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.464 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:18:46.724 nvme0n1 00:18:46.724 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.724 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:46.724 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.724 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:46.724 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:46.724 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.724 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:46.724 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:46.724 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.724 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:46.724 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.724 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:46.724 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:18:46.724 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:46.724 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:46.724 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:46.724 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:46.724 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2RjNWMyNjIzYzAzZTJlYWNhOGU0NzViMTRiMjU5MTRhMmFjMjI5OTFiZmE0OWU3nN04uQ==: 00:18:46.724 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWFjYzc5MDRlZmEyODU3ZjkzMWRiNjE2NGE1ZmJhM2QwODY0NDI4NWRkNjZkNzU11BHGyg==: 00:18:46.724 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:46.724 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:46.724 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2RjNWMyNjIzYzAzZTJlYWNhOGU0NzViMTRiMjU5MTRhMmFjMjI5OTFiZmE0OWU3nN04uQ==: 00:18:46.724 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWFjYzc5MDRlZmEyODU3ZjkzMWRiNjE2NGE1ZmJhM2QwODY0NDI4NWRkNjZkNzU11BHGyg==: ]] 00:18:46.724 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWFjYzc5MDRlZmEyODU3ZjkzMWRiNjE2NGE1ZmJhM2QwODY0NDI4NWRkNjZkNzU11BHGyg==: 00:18:46.724 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:18:46.724 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:46.724 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:46.724 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:46.724 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:46.724 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:18:46.724 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:46.724 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.724 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:46.724 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.724 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:46.724 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:46.724 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:46.724 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:46.724 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:46.724 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:46.724 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:46.724 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:46.724 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:46.724 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:46.724 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:46.724 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:46.724 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.724 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:46.724 nvme0n1 00:18:46.724 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.724 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:46.724 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:46.724 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.724 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:46.724 09:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.985 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:46.985 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:46.985 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.985 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:46.985 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.985 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:46.985 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:18:46.985 
09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:46.985 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:46.985 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:46.985 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:46.985 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWE5ZWQ5NjljMmE1MGY3OTI0OTU5YzRjMDA4ZDY4ZmM2EsA1: 00:18:46.985 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODg4ZTQzZTllMDJiYTliZGMzYmExZDBlMjAyZTRjNTTHOIm4: 00:18:46.985 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:46.985 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:46.985 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWE5ZWQ5NjljMmE1MGY3OTI0OTU5YzRjMDA4ZDY4ZmM2EsA1: 00:18:46.985 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODg4ZTQzZTllMDJiYTliZGMzYmExZDBlMjAyZTRjNTTHOIm4: ]] 00:18:46.985 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODg4ZTQzZTllMDJiYTliZGMzYmExZDBlMjAyZTRjNTTHOIm4: 00:18:46.985 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:18:46.985 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:46.985 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:46.985 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:46.985 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:46.985 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:46.985 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:46.985 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.985 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:46.985 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.985 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:46.985 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:46.985 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:46.985 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:46.985 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:46.985 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:46.985 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:46.985 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:46.985 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:46.985 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:46.985 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:46.985 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:46.985 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.985 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:46.985 nvme0n1 00:18:46.985 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.985 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:46.985 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:46.985 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.985 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:46.985 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.985 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:46.985 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:46.985 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.985 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:46.985 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.985 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:46.985 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:18:46.985 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:46.985 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:46.985 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:46.985 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:46.985 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTNkNzNhNWRjMWMyNzIxOGVjMGY1MzY0MGVlNWM1OWVjYjYwNWRjZjZlMWIzYmI1Km7pog==: 00:18:46.985 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmFiM2VjZGE1YTcyOWZkNjQ4NDliOTllODNjMjQyZGRK6y/N: 00:18:46.985 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:46.985 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:46.985 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTNkNzNhNWRjMWMyNzIxOGVjMGY1MzY0MGVlNWM1OWVjYjYwNWRjZjZlMWIzYmI1Km7pog==: 00:18:46.985 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmFiM2VjZGE1YTcyOWZkNjQ4NDliOTllODNjMjQyZGRK6y/N: ]] 00:18:46.985 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmFiM2VjZGE1YTcyOWZkNjQ4NDliOTllODNjMjQyZGRK6y/N: 00:18:46.985 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:18:46.985 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:46.985 
09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:46.985 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:46.985 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:46.985 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:46.985 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:46.985 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.985 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:46.985 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.985 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:46.985 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:46.985 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:46.985 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:46.985 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:46.985 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:46.985 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:46.985 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:46.985 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:46.985 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:46.985 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:46.985 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:46.985 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.985 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.245 nvme0n1 00:18:47.245 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.245 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:47.245 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:47.245 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.245 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.245 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.245 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:47.245 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:47.245 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.245 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:18:47.245 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.245 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:47.245 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:18:47.245 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:47.245 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:47.245 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:47.245 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:47.245 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2M4YzRiZGYxZGQxYmQ0MDI4MmY5YTVjMGI0OGI5MjFlOThjY2RjMjZmYWEzN2I3MTcyMjU2YmVkNWNmZmRhM5n1X5Y=: 00:18:47.245 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:47.245 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:47.245 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:47.245 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2M4YzRiZGYxZGQxYmQ0MDI4MmY5YTVjMGI0OGI5MjFlOThjY2RjMjZmYWEzN2I3MTcyMjU2YmVkNWNmZmRhM5n1X5Y=: 00:18:47.245 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:47.245 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:18:47.245 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:47.245 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:47.245 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:47.245 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:47.245 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:47.245 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:47.245 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.246 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.246 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.246 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:47.246 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:47.246 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:47.246 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:47.246 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:47.246 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:47.246 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:47.246 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:47.246 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:47.246 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:47.246 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:47.246 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:47.246 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.246 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.506 nvme0n1 00:18:47.506 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.506 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:47.506 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:47.506 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.506 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.506 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.506 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:47.506 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:47.506 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.506 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.506 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.506 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:47.506 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:47.506 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:18:47.506 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:47.506 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:47.506 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:47.506 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:47.506 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGIzNTQ4YjE0NjMxYzNiOGQ3ZmVlNmRlZDYwMTUzZjNC3SAe: 00:18:47.506 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzMwZjhiMGRlOWNmNTExN2VkMmQxYzg5MDcyNjllNmRhZDgyMDc1YjY5OTJkMjA5OWZkNGIyODQ0MTgzZDEyNn5WUho=: 00:18:47.506 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:47.506 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:47.506 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGIzNTQ4YjE0NjMxYzNiOGQ3ZmVlNmRlZDYwMTUzZjNC3SAe: 00:18:47.506 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzMwZjhiMGRlOWNmNTExN2VkMmQxYzg5MDcyNjllNmRhZDgyMDc1YjY5OTJkMjA5OWZkNGIyODQ0MTgzZDEyNn5WUho=: ]] 00:18:47.506 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NzMwZjhiMGRlOWNmNTExN2VkMmQxYzg5MDcyNjllNmRhZDgyMDc1YjY5OTJkMjA5OWZkNGIyODQ0MTgzZDEyNn5WUho=: 00:18:47.506 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:18:47.506 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:47.506 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:47.506 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:47.506 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:47.506 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:47.506 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:47.506 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.506 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.506 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.506 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:47.506 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:47.506 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:47.506 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:47.506 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:47.506 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:47.506 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:47.506 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:47.506 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:47.506 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:47.506 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:47.506 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:47.506 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.506 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.765 nvme0n1 00:18:47.765 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.765 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:47.765 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:47.765 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.765 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.765 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.765 
09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:47.766 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:47.766 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.766 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.766 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.766 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:47.766 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:18:47.766 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:47.766 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:47.766 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:47.766 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:47.766 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2RjNWMyNjIzYzAzZTJlYWNhOGU0NzViMTRiMjU5MTRhMmFjMjI5OTFiZmE0OWU3nN04uQ==: 00:18:47.766 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWFjYzc5MDRlZmEyODU3ZjkzMWRiNjE2NGE1ZmJhM2QwODY0NDI4NWRkNjZkNzU11BHGyg==: 00:18:47.766 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:47.766 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:47.766 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2RjNWMyNjIzYzAzZTJlYWNhOGU0NzViMTRiMjU5MTRhMmFjMjI5OTFiZmE0OWU3nN04uQ==: 00:18:47.766 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWFjYzc5MDRlZmEyODU3ZjkzMWRiNjE2NGE1ZmJhM2QwODY0NDI4NWRkNjZkNzU11BHGyg==: ]] 00:18:47.766 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWFjYzc5MDRlZmEyODU3ZjkzMWRiNjE2NGE1ZmJhM2QwODY0NDI4NWRkNjZkNzU11BHGyg==: 00:18:47.766 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:18:47.766 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:47.766 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:47.766 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:47.766 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:47.766 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:47.766 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:47.766 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.766 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.766 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.766 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:47.766 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:47.766 09:56:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:47.766 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:47.766 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:47.766 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:47.766 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:47.766 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:47.766 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:47.766 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:47.766 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:47.766 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:47.766 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.766 09:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:48.025 nvme0n1 00:18:48.025 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.025 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:48.025 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.025 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:48.025 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:48.025 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.026 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:48.026 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:48.026 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.026 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:48.026 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.026 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:48.026 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:18:48.026 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:48.026 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:48.026 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:48.026 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:48.026 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWE5ZWQ5NjljMmE1MGY3OTI0OTU5YzRjMDA4ZDY4ZmM2EsA1: 00:18:48.026 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODg4ZTQzZTllMDJiYTliZGMzYmExZDBlMjAyZTRjNTTHOIm4: 00:18:48.026 09:56:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:48.026 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:48.026 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWE5ZWQ5NjljMmE1MGY3OTI0OTU5YzRjMDA4ZDY4ZmM2EsA1: 00:18:48.026 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODg4ZTQzZTllMDJiYTliZGMzYmExZDBlMjAyZTRjNTTHOIm4: ]] 00:18:48.026 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODg4ZTQzZTllMDJiYTliZGMzYmExZDBlMjAyZTRjNTTHOIm4: 00:18:48.026 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:18:48.026 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:48.026 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:48.026 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:48.026 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:48.026 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:48.026 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:48.026 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.026 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:48.026 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.026 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:48.026 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:48.026 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:48.026 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:48.026 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:48.026 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:48.026 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:48.026 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:48.026 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:48.026 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:48.026 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:48.026 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:48.026 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.026 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:48.285 nvme0n1 00:18:48.285 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.285 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:48.285 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:48.285 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.285 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:48.285 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.285 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:48.285 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:48.285 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.285 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:48.285 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.285 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:48.285 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:18:48.285 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:48.285 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:48.285 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:48.285 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:48.285 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTNkNzNhNWRjMWMyNzIxOGVjMGY1MzY0MGVlNWM1OWVjYjYwNWRjZjZlMWIzYmI1Km7pog==: 00:18:48.285 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmFiM2VjZGE1YTcyOWZkNjQ4NDliOTllODNjMjQyZGRK6y/N: 00:18:48.285 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:48.285 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:48.285 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTNkNzNhNWRjMWMyNzIxOGVjMGY1MzY0MGVlNWM1OWVjYjYwNWRjZjZlMWIzYmI1Km7pog==: 00:18:48.285 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmFiM2VjZGE1YTcyOWZkNjQ4NDliOTllODNjMjQyZGRK6y/N: ]] 00:18:48.285 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmFiM2VjZGE1YTcyOWZkNjQ4NDliOTllODNjMjQyZGRK6y/N: 00:18:48.286 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:18:48.286 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:48.286 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:48.286 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:48.286 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:48.286 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:48.286 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:48.286 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.286 09:56:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:48.286 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.286 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:48.286 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:48.286 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:48.286 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:48.286 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:48.286 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:48.286 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:48.286 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:48.286 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:48.286 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:48.286 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:48.286 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:48.286 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.286 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:48.546 nvme0n1 00:18:48.546 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.546 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:48.546 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:48.546 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.546 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:48.546 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.546 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:48.546 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:48.546 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.546 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:48.546 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.546 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:48.546 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:18:48.546 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:48.546 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:48.546 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:48.546 
09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:48.546 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2M4YzRiZGYxZGQxYmQ0MDI4MmY5YTVjMGI0OGI5MjFlOThjY2RjMjZmYWEzN2I3MTcyMjU2YmVkNWNmZmRhM5n1X5Y=: 00:18:48.546 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:48.546 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:48.546 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:48.547 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2M4YzRiZGYxZGQxYmQ0MDI4MmY5YTVjMGI0OGI5MjFlOThjY2RjMjZmYWEzN2I3MTcyMjU2YmVkNWNmZmRhM5n1X5Y=: 00:18:48.547 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:48.547 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:18:48.547 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:48.547 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:48.547 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:48.547 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:48.547 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:48.547 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:48.547 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.547 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:48.547 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.547 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:48.547 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:48.547 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:48.547 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:48.547 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:48.547 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:48.547 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:48.547 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:48.547 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:48.547 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:48.547 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:48.547 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:48.547 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.547 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
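The block above closes one full pass of connect_authenticate sha512 ffdhe4096 for key IDs 0 through 4. Every pass traced in this log reduces to the same handful of RPCs; a minimal sketch, reconstructed from the trace and assuming rpc_cmd is the autotest wrapper around SPDK's rpc.py and nvmet_auth_set_key programs the kernel nvmet target the way host/auth.sh does:

  # one iteration of connect_authenticate <digest> <dhgroup> <keyid>
  nvmet_auth_set_key sha512 ffdhe4096 4                        # target side: install key (and ckey, if any) for this keyid
  rpc_cmd bdev_nvme_set_options \
      --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096      # host side: restrict the DH-HMAC-CHAP negotiation
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
      -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4          # authentication happens during connect
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]   # controller (and its nvme0n1 namespace) came up
  rpc_cmd bdev_nvme_detach_controller nvme0                    # tear down before the next keyid

When a controller key exists for the keyid, the attach also carries --dhchap-ctrlr-key ckey<keyid>, as the traces for key IDs 0 to 3 show.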
00:18:48.806 nvme0n1 00:18:48.807 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.807 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:48.807 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:48.807 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.807 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:48.807 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.807 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:48.807 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:48.807 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.807 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:48.807 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.807 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:48.807 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:48.807 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:18:48.807 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:48.807 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:48.807 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:48.807 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:48.807 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGIzNTQ4YjE0NjMxYzNiOGQ3ZmVlNmRlZDYwMTUzZjNC3SAe: 00:18:48.807 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzMwZjhiMGRlOWNmNTExN2VkMmQxYzg5MDcyNjllNmRhZDgyMDc1YjY5OTJkMjA5OWZkNGIyODQ0MTgzZDEyNn5WUho=: 00:18:48.807 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:48.807 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:48.807 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGIzNTQ4YjE0NjMxYzNiOGQ3ZmVlNmRlZDYwMTUzZjNC3SAe: 00:18:48.807 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzMwZjhiMGRlOWNmNTExN2VkMmQxYzg5MDcyNjllNmRhZDgyMDc1YjY5OTJkMjA5OWZkNGIyODQ0MTgzZDEyNn5WUho=: ]] 00:18:48.807 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzMwZjhiMGRlOWNmNTExN2VkMmQxYzg5MDcyNjllNmRhZDgyMDc1YjY5OTJkMjA5OWZkNGIyODQ0MTgzZDEyNn5WUho=: 00:18:48.807 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:18:48.807 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:48.807 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:48.807 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:48.807 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:48.807 09:56:13 
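One detail of the verification step that looks odd in the trace: the comparison is printed as [[ nvme0 == \n\v\m\e\0 ]]. The backslashes are simply how bash xtrace renders a quoted right-hand side inside [[ ]], marking it as a literal string match rather than a glob pattern. The underlying check is roughly:

  name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
  [[ $name == "nvme0" ]]    # quoted RHS, so xtrace echoes it back as \n\v\m\e\0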
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:48.807 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:48.807 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.807 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:48.807 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.807 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:48.807 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:48.807 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:48.807 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:48.807 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:48.807 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:48.807 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:48.807 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:48.807 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:48.807 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:48.807 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:48.807 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:48.807 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.807 09:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.066 nvme0n1 00:18:49.066 09:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.066 09:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:49.066 09:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:49.066 09:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.066 09:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.066 09:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.326 09:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:49.326 09:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:49.326 09:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.326 09:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.326 09:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.326 09:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:49.326 09:56:14 
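Before every attach the trace walks through get_main_ns_ip from nvmf/common.sh, which picks the address to dial based on the transport. Reconstructed roughly from the trace (the name of the transport variable is an assumption; the log only shows its expanded value, tcp):

  get_main_ns_ip() {
      local ip
      local -A ip_candidates=(
          [rdma]=NVMF_FIRST_TARGET_IP
          [tcp]=NVMF_INITIATOR_IP
      )
      ip=${ip_candidates[$TEST_TRANSPORT]}    # tcp -> NVMF_INITIATOR_IP
      [[ -n ${!ip} ]] && echo "${!ip}"        # indirect expansion; resolves to 10.0.0.1 in this run
  }

Its output feeds the -a argument of bdev_nvme_attach_controller.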
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:18:49.326 09:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:49.326 09:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:49.326 09:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:49.326 09:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:49.326 09:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2RjNWMyNjIzYzAzZTJlYWNhOGU0NzViMTRiMjU5MTRhMmFjMjI5OTFiZmE0OWU3nN04uQ==: 00:18:49.326 09:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWFjYzc5MDRlZmEyODU3ZjkzMWRiNjE2NGE1ZmJhM2QwODY0NDI4NWRkNjZkNzU11BHGyg==: 00:18:49.326 09:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:49.326 09:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:49.326 09:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2RjNWMyNjIzYzAzZTJlYWNhOGU0NzViMTRiMjU5MTRhMmFjMjI5OTFiZmE0OWU3nN04uQ==: 00:18:49.326 09:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWFjYzc5MDRlZmEyODU3ZjkzMWRiNjE2NGE1ZmJhM2QwODY0NDI4NWRkNjZkNzU11BHGyg==: ]] 00:18:49.326 09:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWFjYzc5MDRlZmEyODU3ZjkzMWRiNjE2NGE1ZmJhM2QwODY0NDI4NWRkNjZkNzU11BHGyg==: 00:18:49.326 09:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:18:49.326 09:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:49.326 09:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:49.326 09:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:49.326 09:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:49.326 09:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:49.326 09:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:49.326 09:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.326 09:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.326 09:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.326 09:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:49.326 09:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:49.326 09:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:49.326 09:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:49.326 09:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:49.326 09:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:49.326 09:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:49.326 09:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:49.326 09:56:14 
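The ckey=( ... ) assignment repeated at host/auth.sh@58 is a compact bash idiom: the ${ckeys[keyid]:+...} alternate-value expansion leaves the array empty whenever no controller key is configured for that keyid (keyid 4 has ckey='' in this run), so the attach command line splices in --dhchap-ctrlr-key only when bidirectional authentication is being exercised. A small, self-contained illustration with placeholder key strings:

  declare -a ckeys=([1]="DHHC-1:02:placeholder" [4]="")    # hypothetical values, for illustration only
  keyid=4
  ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
  echo "${#ckey[@]}"     # prints 0: no controller-key flag is added
  keyid=1
  ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
  echo "${ckey[@]}"      # prints: --dhchap-ctrlr-key ckey1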
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:49.326 09:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:49.326 09:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:49.326 09:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:49.326 09:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.326 09:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.585 nvme0n1 00:18:49.585 09:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.585 09:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:49.585 09:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:49.585 09:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.585 09:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.585 09:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.585 09:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:49.585 09:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:49.585 09:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.585 09:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.585 09:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.585 09:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:49.585 09:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:18:49.585 09:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:49.585 09:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:49.585 09:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:49.585 09:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:49.585 09:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWE5ZWQ5NjljMmE1MGY3OTI0OTU5YzRjMDA4ZDY4ZmM2EsA1: 00:18:49.585 09:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODg4ZTQzZTllMDJiYTliZGMzYmExZDBlMjAyZTRjNTTHOIm4: 00:18:49.585 09:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:49.585 09:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:49.585 09:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWE5ZWQ5NjljMmE1MGY3OTI0OTU5YzRjMDA4ZDY4ZmM2EsA1: 00:18:49.585 09:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODg4ZTQzZTllMDJiYTliZGMzYmExZDBlMjAyZTRjNTTHOIm4: ]] 00:18:49.585 09:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODg4ZTQzZTllMDJiYTliZGMzYmExZDBlMjAyZTRjNTTHOIm4: 00:18:49.585 09:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:18:49.585 09:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:49.585 09:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:49.585 09:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:49.585 09:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:49.585 09:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:49.585 09:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:49.585 09:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.585 09:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.585 09:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.585 09:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:49.585 09:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:49.585 09:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:49.585 09:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:49.585 09:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:49.585 09:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:49.585 09:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:49.585 09:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:49.585 09:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:49.585 09:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:49.585 09:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:49.586 09:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:49.586 09:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.586 09:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.845 nvme0n1 00:18:49.845 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.845 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:49.845 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:49.845 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.845 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.845 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.104 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:50.104 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:18:50.104 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.104 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:50.104 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.104 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:50.104 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:18:50.104 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:50.104 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:50.104 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:50.104 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:50.104 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTNkNzNhNWRjMWMyNzIxOGVjMGY1MzY0MGVlNWM1OWVjYjYwNWRjZjZlMWIzYmI1Km7pog==: 00:18:50.104 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmFiM2VjZGE1YTcyOWZkNjQ4NDliOTllODNjMjQyZGRK6y/N: 00:18:50.104 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:50.104 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:50.104 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTNkNzNhNWRjMWMyNzIxOGVjMGY1MzY0MGVlNWM1OWVjYjYwNWRjZjZlMWIzYmI1Km7pog==: 00:18:50.104 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmFiM2VjZGE1YTcyOWZkNjQ4NDliOTllODNjMjQyZGRK6y/N: ]] 00:18:50.104 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmFiM2VjZGE1YTcyOWZkNjQ4NDliOTllODNjMjQyZGRK6y/N: 00:18:50.104 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:18:50.104 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:50.104 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:50.104 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:50.104 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:50.104 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:50.104 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:50.104 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.104 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:50.104 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.104 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:50.104 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:50.104 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:50.104 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:50.104 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:50.104 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:50.104 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:50.104 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:50.104 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:50.104 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:50.104 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:50.104 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:50.104 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.104 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:50.363 nvme0n1 00:18:50.363 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.363 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:50.363 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:50.363 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.363 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:50.363 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.363 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:50.363 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:50.363 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.363 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:50.363 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.363 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:50.363 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:18:50.363 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:50.363 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:50.363 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:50.363 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:50.363 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2M4YzRiZGYxZGQxYmQ0MDI4MmY5YTVjMGI0OGI5MjFlOThjY2RjMjZmYWEzN2I3MTcyMjU2YmVkNWNmZmRhM5n1X5Y=: 00:18:50.363 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:50.363 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:50.363 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:50.363 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:Y2M4YzRiZGYxZGQxYmQ0MDI4MmY5YTVjMGI0OGI5MjFlOThjY2RjMjZmYWEzN2I3MTcyMjU2YmVkNWNmZmRhM5n1X5Y=: 00:18:50.363 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:50.363 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:18:50.363 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:50.363 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:50.363 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:50.363 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:50.363 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:50.363 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:50.363 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.363 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:50.363 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.363 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:50.363 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:50.363 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:50.363 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:50.363 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:50.363 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:50.363 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:50.363 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:50.363 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:50.363 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:50.363 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:50.363 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:50.363 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.363 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:50.623 nvme0n1 00:18:50.623 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.623 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:50.623 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.623 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:50.623 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:50.623 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.883 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:50.883 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:50.883 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.883 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:50.883 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.883 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:50.883 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:50.883 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:18:50.883 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:50.883 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:50.883 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:50.883 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:50.883 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGIzNTQ4YjE0NjMxYzNiOGQ3ZmVlNmRlZDYwMTUzZjNC3SAe: 00:18:50.883 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzMwZjhiMGRlOWNmNTExN2VkMmQxYzg5MDcyNjllNmRhZDgyMDc1YjY5OTJkMjA5OWZkNGIyODQ0MTgzZDEyNn5WUho=: 00:18:50.883 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:50.883 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:50.883 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGIzNTQ4YjE0NjMxYzNiOGQ3ZmVlNmRlZDYwMTUzZjNC3SAe: 00:18:50.883 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzMwZjhiMGRlOWNmNTExN2VkMmQxYzg5MDcyNjllNmRhZDgyMDc1YjY5OTJkMjA5OWZkNGIyODQ0MTgzZDEyNn5WUho=: ]] 00:18:50.883 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzMwZjhiMGRlOWNmNTExN2VkMmQxYzg5MDcyNjllNmRhZDgyMDc1YjY5OTJkMjA5OWZkNGIyODQ0MTgzZDEyNn5WUho=: 00:18:50.883 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:18:50.883 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:50.883 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:50.883 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:50.883 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:50.883 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:50.883 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:50.883 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.883 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:50.883 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.883 09:56:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:50.883 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:50.883 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:50.883 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:50.883 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:50.883 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:50.883 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:50.883 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:50.884 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:50.884 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:50.884 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:50.884 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:50.884 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.884 09:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:51.452 nvme0n1 00:18:51.452 09:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.452 09:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:51.452 09:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:51.452 09:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.452 09:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:51.452 09:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.452 09:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:51.452 09:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:51.452 09:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.452 09:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:51.452 09:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.452 09:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:51.452 09:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:18:51.452 09:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:51.452 09:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:51.452 09:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:51.452 09:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:51.452 09:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:Y2RjNWMyNjIzYzAzZTJlYWNhOGU0NzViMTRiMjU5MTRhMmFjMjI5OTFiZmE0OWU3nN04uQ==: 00:18:51.452 09:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWFjYzc5MDRlZmEyODU3ZjkzMWRiNjE2NGE1ZmJhM2QwODY0NDI4NWRkNjZkNzU11BHGyg==: 00:18:51.452 09:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:51.452 09:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:51.452 09:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2RjNWMyNjIzYzAzZTJlYWNhOGU0NzViMTRiMjU5MTRhMmFjMjI5OTFiZmE0OWU3nN04uQ==: 00:18:51.452 09:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWFjYzc5MDRlZmEyODU3ZjkzMWRiNjE2NGE1ZmJhM2QwODY0NDI4NWRkNjZkNzU11BHGyg==: ]] 00:18:51.452 09:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWFjYzc5MDRlZmEyODU3ZjkzMWRiNjE2NGE1ZmJhM2QwODY0NDI4NWRkNjZkNzU11BHGyg==: 00:18:51.452 09:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:18:51.452 09:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:51.452 09:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:51.452 09:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:51.452 09:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:51.452 09:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:51.452 09:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:51.452 09:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.452 09:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:51.452 09:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.452 09:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:51.452 09:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:51.452 09:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:51.452 09:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:51.452 09:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:51.452 09:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:51.452 09:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:51.452 09:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:51.452 09:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:51.452 09:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:51.452 09:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:51.452 09:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:51.452 09:56:16 
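All of the secrets echoed in this log use the NVMe-oF DH-HMAC-CHAP secret representation, DHHC-1:<hh>:<base64>:, where the middle field records which hash transformed the generated secret (00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512) and the base64 blob carries the secret plus a CRC-32 of it; that mapping comes from the NVMe in-band authentication spec rather than from this log, so treat it as background. Splitting such a string into its fields is plain bash:

  key='DHHC-1:01:OWE5ZWQ5NjljMmE1MGY3OTI0OTU5YzRjMDA4ZDY4ZmM2EsA1:'   # one of the key2 values seen above
  IFS=: read -r fmt hash secret _ <<< "$key"
  echo "$fmt $hash"      # DHHC-1 01  -> the secret was transformed with SHA-256 before encoding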
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.452 09:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.022 nvme0n1 00:18:52.022 09:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.022 09:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:52.022 09:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:52.022 09:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.022 09:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.022 09:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.022 09:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.022 09:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:52.022 09:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.022 09:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.022 09:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.022 09:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:52.022 09:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:18:52.022 09:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:52.022 09:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:52.022 09:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:52.022 09:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:52.022 09:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWE5ZWQ5NjljMmE1MGY3OTI0OTU5YzRjMDA4ZDY4ZmM2EsA1: 00:18:52.022 09:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODg4ZTQzZTllMDJiYTliZGMzYmExZDBlMjAyZTRjNTTHOIm4: 00:18:52.022 09:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:52.022 09:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:52.022 09:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWE5ZWQ5NjljMmE1MGY3OTI0OTU5YzRjMDA4ZDY4ZmM2EsA1: 00:18:52.022 09:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODg4ZTQzZTllMDJiYTliZGMzYmExZDBlMjAyZTRjNTTHOIm4: ]] 00:18:52.022 09:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODg4ZTQzZTllMDJiYTliZGMzYmExZDBlMjAyZTRjNTTHOIm4: 00:18:52.022 09:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:18:52.022 09:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:52.022 09:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:52.022 09:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:52.022 09:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:52.022 09:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:52.022 09:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:52.022 09:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.022 09:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.022 09:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.022 09:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:52.022 09:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:52.022 09:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:52.022 09:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:52.022 09:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:52.022 09:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:52.022 09:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:52.022 09:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:52.022 09:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:52.022 09:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:52.022 09:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:52.022 09:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:52.022 09:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.022 09:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.592 nvme0n1 00:18:52.592 09:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.592 09:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:52.592 09:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.592 09:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:52.592 09:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.592 09:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.592 09:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.592 09:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:52.592 09:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.592 09:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.592 09:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.592 09:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:52.592 09:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:18:52.592 09:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:52.592 09:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:52.592 09:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:52.592 09:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:52.592 09:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTNkNzNhNWRjMWMyNzIxOGVjMGY1MzY0MGVlNWM1OWVjYjYwNWRjZjZlMWIzYmI1Km7pog==: 00:18:52.592 09:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmFiM2VjZGE1YTcyOWZkNjQ4NDliOTllODNjMjQyZGRK6y/N: 00:18:52.592 09:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:52.592 09:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:52.592 09:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTNkNzNhNWRjMWMyNzIxOGVjMGY1MzY0MGVlNWM1OWVjYjYwNWRjZjZlMWIzYmI1Km7pog==: 00:18:52.592 09:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmFiM2VjZGE1YTcyOWZkNjQ4NDliOTllODNjMjQyZGRK6y/N: ]] 00:18:52.592 09:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmFiM2VjZGE1YTcyOWZkNjQ4NDliOTllODNjMjQyZGRK6y/N: 00:18:52.592 09:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:18:52.592 09:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:52.592 09:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:52.592 09:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:52.592 09:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:52.592 09:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:52.592 09:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:52.592 09:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.592 09:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.592 09:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.592 09:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:52.592 09:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:52.592 09:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:52.592 09:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:52.592 09:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:52.592 09:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:52.592 09:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:52.592 09:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:52.592 09:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:52.592 09:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:52.592 09:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:52.592 09:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:52.592 09:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.592 09:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:53.162 nvme0n1 00:18:53.162 09:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.162 09:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:53.162 09:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.162 09:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:53.162 09:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:53.162 09:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.162 09:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:53.162 09:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:53.162 09:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.162 09:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:53.162 09:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.162 09:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:53.162 09:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:18:53.162 09:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:53.162 09:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:53.162 09:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:53.162 09:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:53.162 09:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2M4YzRiZGYxZGQxYmQ0MDI4MmY5YTVjMGI0OGI5MjFlOThjY2RjMjZmYWEzN2I3MTcyMjU2YmVkNWNmZmRhM5n1X5Y=: 00:18:53.162 09:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:53.162 09:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:53.162 09:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:53.162 09:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2M4YzRiZGYxZGQxYmQ0MDI4MmY5YTVjMGI0OGI5MjFlOThjY2RjMjZmYWEzN2I3MTcyMjU2YmVkNWNmZmRhM5n1X5Y=: 00:18:53.162 09:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:53.162 09:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:18:53.162 09:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:53.162 09:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:53.162 09:56:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:53.162 09:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:53.162 09:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:53.162 09:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:53.162 09:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.162 09:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:53.162 09:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.162 09:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:53.162 09:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:53.162 09:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:53.162 09:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:53.162 09:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:53.162 09:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:53.162 09:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:53.162 09:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:53.162 09:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:53.162 09:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:53.162 09:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:53.162 09:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:53.162 09:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.162 09:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:53.732 nvme0n1 00:18:53.732 09:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.733 09:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:53.733 09:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:53.733 09:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.733 09:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:53.733 09:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.733 09:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:53.733 09:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:53.733 09:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.733 09:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:53.733 09:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.733 09:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:18:53.733 09:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:53.733 09:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:53.733 09:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:53.733 09:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:53.733 09:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2RjNWMyNjIzYzAzZTJlYWNhOGU0NzViMTRiMjU5MTRhMmFjMjI5OTFiZmE0OWU3nN04uQ==: 00:18:53.733 09:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWFjYzc5MDRlZmEyODU3ZjkzMWRiNjE2NGE1ZmJhM2QwODY0NDI4NWRkNjZkNzU11BHGyg==: 00:18:53.733 09:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:53.733 09:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:53.733 09:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2RjNWMyNjIzYzAzZTJlYWNhOGU0NzViMTRiMjU5MTRhMmFjMjI5OTFiZmE0OWU3nN04uQ==: 00:18:53.733 09:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWFjYzc5MDRlZmEyODU3ZjkzMWRiNjE2NGE1ZmJhM2QwODY0NDI4NWRkNjZkNzU11BHGyg==: ]] 00:18:53.733 09:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWFjYzc5MDRlZmEyODU3ZjkzMWRiNjE2NGE1ZmJhM2QwODY0NDI4NWRkNjZkNzU11BHGyg==: 00:18:53.733 09:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:53.733 09:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.733 09:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:53.733 09:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.733 09:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:18:53.733 09:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:53.733 09:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:53.733 09:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:53.733 09:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:53.733 09:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:53.733 09:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:53.733 09:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:53.733 09:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:53.733 09:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:53.733 09:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:53.733 09:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:18:53.733 09:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # 
local es=0 00:18:53.733 09:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:18:53.733 09:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:53.733 09:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:53.733 09:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:53.733 09:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:53.733 09:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:18:53.733 09:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.733 09:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:53.733 request: 00:18:53.733 { 00:18:53.733 "name": "nvme0", 00:18:53.733 "trtype": "tcp", 00:18:53.733 "traddr": "10.0.0.1", 00:18:53.733 "adrfam": "ipv4", 00:18:53.733 "trsvcid": "4420", 00:18:53.733 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:18:53.733 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:18:53.733 "prchk_reftag": false, 00:18:53.733 "prchk_guard": false, 00:18:53.733 "hdgst": false, 00:18:53.733 "ddgst": false, 00:18:53.733 "allow_unrecognized_csi": false, 00:18:53.733 "method": "bdev_nvme_attach_controller", 00:18:53.733 "req_id": 1 00:18:53.733 } 00:18:53.733 Got JSON-RPC error response 00:18:53.733 response: 00:18:53.733 { 00:18:53.733 "code": -5, 00:18:53.733 "message": "Input/output error" 00:18:53.733 } 00:18:53.733 09:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:53.733 09:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:18:53.733 09:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:53.733 09:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:53.733 09:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:53.733 09:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:18:53.733 09:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:18:53.733 09:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.733 09:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:53.733 09:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.994 09:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:18:53.994 09:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:18:53.994 09:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:53.994 09:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:53.994 09:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:53.994 09:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:53.994 09:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:53.994 09:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:53.994 09:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:53.994 09:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:53.994 09:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:53.994 09:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:53.994 09:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:18:53.994 09:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:18:53.994 09:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:18:53.994 09:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:53.994 09:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:53.994 09:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:53.994 09:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:53.994 09:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:18:53.994 09:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.994 09:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:53.994 request: 00:18:53.994 { 00:18:53.994 "name": "nvme0", 00:18:53.994 "trtype": "tcp", 00:18:53.994 "traddr": "10.0.0.1", 00:18:53.994 "adrfam": "ipv4", 00:18:53.994 "trsvcid": "4420", 00:18:53.994 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:18:53.994 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:18:53.994 "prchk_reftag": false, 00:18:53.994 "prchk_guard": false, 00:18:53.994 "hdgst": false, 00:18:53.994 "ddgst": false, 00:18:53.994 "dhchap_key": "key2", 00:18:53.994 "allow_unrecognized_csi": false, 00:18:53.994 "method": "bdev_nvme_attach_controller", 00:18:53.994 "req_id": 1 00:18:53.994 } 00:18:53.994 Got JSON-RPC error response 00:18:53.994 response: 00:18:53.994 { 00:18:53.994 "code": -5, 00:18:53.994 "message": "Input/output error" 00:18:53.994 } 00:18:53.994 09:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:53.994 09:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:18:53.994 09:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:53.994 09:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:53.994 09:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:53.994 09:56:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:18:53.994 09:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.994 09:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:53.994 09:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:18:53.994 09:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.994 09:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:18:53.994 09:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:18:53.994 09:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:53.994 09:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:53.994 09:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:53.994 09:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:53.994 09:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:53.994 09:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:53.994 09:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:53.994 09:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:53.994 09:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:53.994 09:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:53.995 09:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:53.995 09:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:18:53.995 09:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:53.995 09:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:53.995 09:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:53.995 09:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:53.995 09:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:53.995 09:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:53.995 09:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.995 09:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:53.995 request: 00:18:53.995 { 00:18:53.995 "name": "nvme0", 00:18:53.995 "trtype": "tcp", 00:18:53.995 "traddr": "10.0.0.1", 00:18:53.995 "adrfam": "ipv4", 00:18:53.995 "trsvcid": "4420", 
00:18:53.995 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:18:53.995 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:18:53.995 "prchk_reftag": false, 00:18:53.995 "prchk_guard": false, 00:18:53.995 "hdgst": false, 00:18:53.995 "ddgst": false, 00:18:53.995 "dhchap_key": "key1", 00:18:53.995 "dhchap_ctrlr_key": "ckey2", 00:18:53.995 "allow_unrecognized_csi": false, 00:18:53.995 "method": "bdev_nvme_attach_controller", 00:18:53.995 "req_id": 1 00:18:53.995 } 00:18:53.995 Got JSON-RPC error response 00:18:53.995 response: 00:18:53.995 { 00:18:53.995 "code": -5, 00:18:53.995 "message": "Input/output error" 00:18:53.995 } 00:18:53.995 09:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:53.995 09:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:18:53.995 09:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:53.995 09:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:53.995 09:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:53.995 09:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:18:53.995 09:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:53.995 09:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:53.995 09:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:53.995 09:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:53.995 09:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:53.995 09:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:53.995 09:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:53.995 09:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:53.995 09:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:53.995 09:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:53.995 09:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:53.995 09:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.995 09:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:53.995 nvme0n1 00:18:53.995 09:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.995 09:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:18:53.995 09:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:53.995 09:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:53.995 09:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:53.995 09:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:53.995 09:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:OWE5ZWQ5NjljMmE1MGY3OTI0OTU5YzRjMDA4ZDY4ZmM2EsA1: 00:18:53.995 09:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODg4ZTQzZTllMDJiYTliZGMzYmExZDBlMjAyZTRjNTTHOIm4: 00:18:53.995 09:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:53.995 09:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:53.995 09:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWE5ZWQ5NjljMmE1MGY3OTI0OTU5YzRjMDA4ZDY4ZmM2EsA1: 00:18:53.995 09:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODg4ZTQzZTllMDJiYTliZGMzYmExZDBlMjAyZTRjNTTHOIm4: ]] 00:18:53.995 09:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODg4ZTQzZTllMDJiYTliZGMzYmExZDBlMjAyZTRjNTTHOIm4: 00:18:53.995 09:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:53.995 09:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.255 09:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:54.255 09:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.255 09:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:18:54.255 09:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:18:54.255 09:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.255 09:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:54.255 09:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.255 09:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.255 09:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:54.255 09:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:18:54.255 09:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:54.255 09:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:54.255 09:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:54.255 09:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:54.255 09:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:54.255 09:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:54.255 09:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.255 09:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:54.255 request: 00:18:54.255 { 00:18:54.255 "name": "nvme0", 00:18:54.255 "dhchap_key": "key1", 00:18:54.255 "dhchap_ctrlr_key": "ckey2", 00:18:54.255 "method": "bdev_nvme_set_keys", 00:18:54.255 "req_id": 1 00:18:54.255 } 00:18:54.255 Got JSON-RPC error response 00:18:54.255 response: 00:18:54.255 
{ 00:18:54.255 "code": -13, 00:18:54.255 "message": "Permission denied" 00:18:54.255 } 00:18:54.255 09:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:54.255 09:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:18:54.255 09:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:54.255 09:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:54.255 09:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:54.255 09:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:18:54.255 09:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:18:54.255 09:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.255 09:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:54.255 09:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.255 09:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:18:54.255 09:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:18:55.192 09:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:18:55.192 09:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:18:55.192 09:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.192 09:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:55.192 09:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.452 09:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:18:55.452 09:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:18:55.452 09:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:55.452 09:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:55.452 09:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:55.452 09:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:55.452 09:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2RjNWMyNjIzYzAzZTJlYWNhOGU0NzViMTRiMjU5MTRhMmFjMjI5OTFiZmE0OWU3nN04uQ==: 00:18:55.452 09:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWFjYzc5MDRlZmEyODU3ZjkzMWRiNjE2NGE1ZmJhM2QwODY0NDI4NWRkNjZkNzU11BHGyg==: 00:18:55.452 09:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:55.452 09:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:55.452 09:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2RjNWMyNjIzYzAzZTJlYWNhOGU0NzViMTRiMjU5MTRhMmFjMjI5OTFiZmE0OWU3nN04uQ==: 00:18:55.452 09:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWFjYzc5MDRlZmEyODU3ZjkzMWRiNjE2NGE1ZmJhM2QwODY0NDI4NWRkNjZkNzU11BHGyg==: ]] 00:18:55.452 09:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWFjYzc5MDRlZmEyODU3ZjkzMWRiNjE2NGE1ZmJhM2QwODY0NDI4NWRkNjZkNzU11BHGyg==: 00:18:55.452 09:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@142 -- # get_main_ns_ip 00:18:55.452 09:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:55.452 09:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:55.452 09:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:55.452 09:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:55.452 09:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:55.452 09:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:55.452 09:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:55.452 09:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:55.452 09:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:55.452 09:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:55.452 09:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:55.452 09:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.452 09:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:55.452 nvme0n1 00:18:55.452 09:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.452 09:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:18:55.452 09:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:55.452 09:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:55.452 09:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:55.452 09:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:55.452 09:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWE5ZWQ5NjljMmE1MGY3OTI0OTU5YzRjMDA4ZDY4ZmM2EsA1: 00:18:55.452 09:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODg4ZTQzZTllMDJiYTliZGMzYmExZDBlMjAyZTRjNTTHOIm4: 00:18:55.452 09:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:55.452 09:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:55.452 09:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWE5ZWQ5NjljMmE1MGY3OTI0OTU5YzRjMDA4ZDY4ZmM2EsA1: 00:18:55.452 09:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODg4ZTQzZTllMDJiYTliZGMzYmExZDBlMjAyZTRjNTTHOIm4: ]] 00:18:55.452 09:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODg4ZTQzZTllMDJiYTliZGMzYmExZDBlMjAyZTRjNTTHOIm4: 00:18:55.453 09:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:18:55.453 09:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:18:55.453 09:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:18:55.453 09:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:55.453 09:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:55.453 09:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:55.453 09:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:55.453 09:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:18:55.453 09:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.453 09:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:55.453 request: 00:18:55.453 { 00:18:55.453 "name": "nvme0", 00:18:55.453 "dhchap_key": "key2", 00:18:55.453 "dhchap_ctrlr_key": "ckey1", 00:18:55.453 "method": "bdev_nvme_set_keys", 00:18:55.453 "req_id": 1 00:18:55.453 } 00:18:55.453 Got JSON-RPC error response 00:18:55.453 response: 00:18:55.453 { 00:18:55.453 "code": -13, 00:18:55.453 "message": "Permission denied" 00:18:55.453 } 00:18:55.453 09:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:55.453 09:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:18:55.453 09:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:55.453 09:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:55.453 09:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:55.453 09:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:18:55.453 09:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:18:55.453 09:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.453 09:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:55.453 09:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.453 09:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:18:55.453 09:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:18:56.833 09:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:18:56.833 09:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.833 09:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:56.833 09:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:18:56.833 09:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.833 09:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:18:56.833 09:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:18:56.833 09:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:18:56.833 09:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:18:56.833 09:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:18:56.833 09:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:18:56.833 09:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:56.833 09:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:18:56.833 09:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:56.833 09:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:56.833 rmmod nvme_tcp 00:18:56.833 rmmod nvme_fabrics 00:18:56.833 09:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:56.833 09:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:18:56.833 09:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:18:56.833 09:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 78220 ']' 00:18:56.833 09:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 78220 00:18:56.833 09:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 78220 ']' 00:18:56.833 09:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 78220 00:18:56.833 09:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:18:56.833 09:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:56.833 09:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78220 00:18:56.833 killing process with pid 78220 00:18:56.833 09:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:56.833 09:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:56.833 09:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78220' 00:18:56.833 09:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 78220 00:18:56.833 09:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 78220 00:18:56.833 09:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:56.833 09:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:56.833 09:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:56.833 09:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:18:56.833 09:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:56.833 09:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:18:56.833 09:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:18:56.833 09:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:56.833 09:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:56.833 09:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:56.833 09:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:57.098 09:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:57.098 09:56:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:57.098 09:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:57.098 09:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:57.098 09:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:57.098 09:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:57.098 09:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:57.098 09:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:57.098 09:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:57.098 09:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:57.098 09:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:57.098 09:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:57.098 09:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:57.098 09:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:57.098 09:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:57.098 09:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@300 -- # return 0 00:18:57.098 09:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:18:57.098 09:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:18:57.098 09:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:18:57.098 09:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:18:57.098 09:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:18:57.098 09:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:18:57.098 09:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:18:57.098 09:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:18:57.098 09:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:18:57.098 09:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:18:57.098 09:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:18:57.389 09:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:57.956 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:57.956 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 
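Note: the nvmf_auth_host run that is being torn down here exercised NVMe/TCP DH-HMAC-CHAP in both directions: nvmet_auth_set_key seeded the kernel nvmet target (the configfs tree under /sys/kernel/config/nvmet that the cleanup above removes) with the per-host digest, FFDHE group and DHHC-1 secrets, while the SPDK initiator side was driven over JSON-RPC. Below is a minimal sketch of that host-side sequence, reconstructed only from the RPC calls visible in the trace; rpc_cmd is the test suite's helper for issuing JSON-RPC to the running app, and the addresses/NQNs simply mirror the log.

  # limit negotiation to one digest/DH group, then connect using key 3 and controller key 3
  rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key3 --dhchap-ctrlr-key ckey3
  # re-key a live controller with a key pair the target also holds (the successful case above);
  # mismatched pairs fail with -13 Permission denied, and connecting without the right keys
  # fails with -5 Input/output error, as the request/response blocks earlier in the trace show
  rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
  rpc_cmd bdev_nvme_detach_controller nvme0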
00:18:57.956 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:18:58.214 09:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.7Nn /tmp/spdk.key-null.ZXZ /tmp/spdk.key-sha256.x74 /tmp/spdk.key-sha384.5Kl /tmp/spdk.key-sha512.PgB /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:18:58.214 09:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:58.473 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:58.473 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:58.473 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:58.473 00:18:58.473 real 0m36.488s 00:18:58.473 user 0m33.242s 00:18:58.473 sys 0m4.089s 00:18:58.473 ************************************ 00:18:58.473 END TEST nvmf_auth_host 00:18:58.473 ************************************ 00:18:58.473 09:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:58.473 09:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:58.473 09:56:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:18:58.473 09:56:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:18:58.473 09:56:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:58.473 09:56:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:58.473 09:56:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:58.473 ************************************ 00:18:58.473 START TEST nvmf_digest 00:18:58.473 ************************************ 00:18:58.473 09:56:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:18:58.733 * Looking for test storage... 
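Note: nvmf_digest begins by sourcing nvmf/common.sh and calling nvmftestinit, which builds a veth/netns topology so the SPDK target (run inside the nvmf_tgt_ns_spdk namespace) and the host-side initiator can reach each other over the 10.0.0.x addresses used throughout these tests. The "Cannot find device" and "Cannot open network namespace" messages in the trace below are expected: the helper first tears down any leftover topology before creating a fresh one. A condensed sketch of the setup commands that appear further down (only a subset is shown):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if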
00:18:58.733 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:58.733 09:56:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:58.733 09:56:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lcov --version 00:18:58.733 09:56:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:58.733 09:56:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:58.733 09:56:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:58.733 09:56:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:58.733 09:56:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:58.733 09:56:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:18:58.733 09:56:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:18:58.733 09:56:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:18:58.733 09:56:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:18:58.733 09:56:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:18:58.733 09:56:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:18:58.733 09:56:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:18:58.733 09:56:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:58.733 09:56:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:18:58.733 09:56:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:18:58.733 09:56:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:58.733 09:56:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:58.733 09:56:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:18:58.733 09:56:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:18:58.733 09:56:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:58.733 09:56:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:18:58.733 09:56:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:18:58.733 09:56:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:18:58.733 09:56:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:18:58.733 09:56:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:58.733 09:56:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:18:58.733 09:56:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:18:58.733 09:56:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:58.733 09:56:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:58.733 09:56:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:18:58.733 09:56:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:58.734 09:56:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:58.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:58.734 --rc genhtml_branch_coverage=1 00:18:58.734 --rc genhtml_function_coverage=1 00:18:58.734 --rc genhtml_legend=1 00:18:58.734 --rc geninfo_all_blocks=1 00:18:58.734 --rc geninfo_unexecuted_blocks=1 00:18:58.734 00:18:58.734 ' 00:18:58.734 09:56:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:58.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:58.734 --rc genhtml_branch_coverage=1 00:18:58.734 --rc genhtml_function_coverage=1 00:18:58.734 --rc genhtml_legend=1 00:18:58.734 --rc geninfo_all_blocks=1 00:18:58.734 --rc geninfo_unexecuted_blocks=1 00:18:58.734 00:18:58.734 ' 00:18:58.734 09:56:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:58.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:58.734 --rc genhtml_branch_coverage=1 00:18:58.734 --rc genhtml_function_coverage=1 00:18:58.734 --rc genhtml_legend=1 00:18:58.734 --rc geninfo_all_blocks=1 00:18:58.734 --rc geninfo_unexecuted_blocks=1 00:18:58.734 00:18:58.734 ' 00:18:58.734 09:56:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:58.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:58.734 --rc genhtml_branch_coverage=1 00:18:58.734 --rc genhtml_function_coverage=1 00:18:58.734 --rc genhtml_legend=1 00:18:58.734 --rc geninfo_all_blocks=1 00:18:58.734 --rc geninfo_unexecuted_blocks=1 00:18:58.734 00:18:58.734 ' 00:18:58.734 09:56:23 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:58.734 09:56:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:18:58.734 09:56:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:58.734 09:56:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:58.734 09:56:23 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:58.734 09:56:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:58.734 09:56:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:58.734 09:56:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:58.734 09:56:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:58.734 09:56:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:58.734 09:56:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:58.734 09:56:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:58.734 09:56:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:18:58.734 09:56:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:18:58.734 09:56:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:58.734 09:56:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:58.734 09:56:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:58.734 09:56:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:58.734 09:56:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:58.734 09:56:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:18:58.734 09:56:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:58.734 09:56:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:58.734 09:56:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:58.734 09:56:23 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:58.734 09:56:23 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:58.734 09:56:23 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:58.734 09:56:23 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:18:58.734 09:56:23 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:58.734 09:56:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:18:58.734 09:56:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:58.734 09:56:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:58.734 09:56:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:58.734 09:56:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:58.734 09:56:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:58.734 09:56:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:58.734 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:58.734 09:56:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:58.734 09:56:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:58.734 09:56:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:58.734 09:56:23 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:18:58.734 09:56:23 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:18:58.734 09:56:23 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:18:58.734 09:56:23 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:18:58.734 09:56:23 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:18:58.734 09:56:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:58.734 09:56:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:58.734 09:56:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:58.734 09:56:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:58.734 09:56:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:58.734 09:56:23 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:58.734 09:56:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:58.734 09:56:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:58.734 09:56:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:58.734 09:56:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:58.734 09:56:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:58.734 09:56:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:58.734 09:56:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:58.734 09:56:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:58.734 09:56:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:58.734 09:56:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:58.734 09:56:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:58.734 09:56:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:58.734 09:56:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:58.734 09:56:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:58.734 09:56:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:58.734 09:56:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:58.734 09:56:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:58.734 09:56:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:58.734 09:56:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:58.734 09:56:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:58.734 09:56:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:58.734 09:56:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:58.734 09:56:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:58.734 09:56:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:58.734 09:56:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:58.734 Cannot find device "nvmf_init_br" 00:18:58.734 09:56:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # true 00:18:58.734 09:56:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:58.734 Cannot find device "nvmf_init_br2" 00:18:58.734 09:56:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # true 00:18:58.734 09:56:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:58.734 Cannot find device "nvmf_tgt_br" 00:18:58.734 09:56:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # true 00:18:58.735 09:56:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # ip link 
set nvmf_tgt_br2 nomaster 00:18:58.994 Cannot find device "nvmf_tgt_br2" 00:18:58.994 09:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # true 00:18:58.994 09:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:58.994 Cannot find device "nvmf_init_br" 00:18:58.994 09:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # true 00:18:58.994 09:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:58.994 Cannot find device "nvmf_init_br2" 00:18:58.994 09:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # true 00:18:58.994 09:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:58.994 Cannot find device "nvmf_tgt_br" 00:18:58.994 09:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # true 00:18:58.994 09:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:58.994 Cannot find device "nvmf_tgt_br2" 00:18:58.994 09:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # true 00:18:58.994 09:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:58.994 Cannot find device "nvmf_br" 00:18:58.994 09:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # true 00:18:58.994 09:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:58.994 Cannot find device "nvmf_init_if" 00:18:58.994 09:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # true 00:18:58.994 09:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:58.994 Cannot find device "nvmf_init_if2" 00:18:58.994 09:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # true 00:18:58.994 09:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:58.994 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:58.994 09:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # true 00:18:58.994 09:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:58.994 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:58.994 09:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # true 00:18:58.994 09:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:58.994 09:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:58.994 09:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:58.994 09:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:58.994 09:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:58.994 09:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:58.994 09:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:58.994 09:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:58.994 09:56:24 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:58.994 09:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:58.994 09:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:58.994 09:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:58.994 09:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:58.994 09:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:58.994 09:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:58.994 09:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:58.994 09:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:58.994 09:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:58.994 09:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:58.994 09:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:58.994 09:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:58.994 09:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:58.994 09:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:59.252 09:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:59.252 09:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:59.252 09:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:59.252 09:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:59.252 09:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:59.252 09:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:59.252 09:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:59.252 09:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:59.252 09:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:59.252 09:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:59.252 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:18:59.252 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.084 ms 00:18:59.252 00:18:59.252 --- 10.0.0.3 ping statistics --- 00:18:59.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:59.252 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:18:59.252 09:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:59.252 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:59.252 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.056 ms 00:18:59.252 00:18:59.252 --- 10.0.0.4 ping statistics --- 00:18:59.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:59.252 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:18:59.252 09:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:59.252 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:59.252 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:18:59.252 00:18:59.252 --- 10.0.0.1 ping statistics --- 00:18:59.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:59.252 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:18:59.252 09:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:59.252 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:59.252 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:18:59.252 00:18:59.252 --- 10.0.0.2 ping statistics --- 00:18:59.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:59.252 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:18:59.252 09:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:59.252 09:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@461 -- # return 0 00:18:59.252 09:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:59.252 09:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:59.252 09:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:59.252 09:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:59.252 09:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:59.252 09:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:59.252 09:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:59.252 09:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:18:59.252 09:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:18:59.252 09:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:18:59.252 09:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:59.252 09:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:59.252 09:56:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:18:59.252 ************************************ 00:18:59.252 START TEST nvmf_digest_clean 00:18:59.252 ************************************ 00:18:59.252 09:56:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:18:59.252 09:56:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 
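
The block above is the whole test network in miniature: veth pairs for the initiator and target sides, the target ends pushed into the nvmf_tgt_ns_spdk namespace, everything joined by the nvmf_br bridge, port 4420 opened, and connectivity proven with single pings in both directions. A trimmed manual equivalent (only the first initiator/target pair shown; the trace also brings up nvmf_init_if2 at 10.0.0.2 and nvmf_tgt_if2 at 10.0.0.4) looks roughly like this:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator side stays in the host
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target side goes into the namespace
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.3                                              # host to namespaced target, as checked above
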
00:18:59.252 09:56:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:18:59.252 09:56:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:18:59.252 09:56:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:18:59.252 09:56:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:18:59.252 09:56:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:59.252 09:56:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:59.252 09:56:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:59.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:59.252 09:56:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=79855 00:18:59.252 09:56:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 79855 00:18:59.252 09:56:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 79855 ']' 00:18:59.252 09:56:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:18:59.252 09:56:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:59.252 09:56:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:59.252 09:56:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:59.252 09:56:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:59.252 09:56:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:59.252 [2024-12-06 09:56:24.446872] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:18:59.252 [2024-12-06 09:56:24.446958] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:59.510 [2024-12-06 09:56:24.602760] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:59.510 [2024-12-06 09:56:24.678346] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:59.510 [2024-12-06 09:56:24.678406] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:59.510 [2024-12-06 09:56:24.678421] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:59.510 [2024-12-06 09:56:24.678432] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:59.510 [2024-12-06 09:56:24.678442] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
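
With the network in place, the target is simply the nvmf_tgt binary launched inside the namespace with --wait-for-rpc, exactly as in the nvmfappstart line above; the configuration that follows is driven over /var/tmp/spdk.sock by rpc_cmd and is not echoed in the trace. A plausible minimal reconstruction of that hidden step, run from the SPDK repo root with standard rpc.py commands (the null-bdev size and block size here are illustrative guesses, not values taken from the log), would be:

    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
    ./scripts/rpc.py framework_start_init
    ./scripts/rpc.py nvmf_create_transport -t tcp -o                  # NVMF_TRANSPORT_OPTS as assembled earlier in the trace
    ./scripts/rpc.py bdev_null_create null0 100 4096                  # backing bdev named null0, as listed in the trace
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
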
00:18:59.510 [2024-12-06 09:56:24.678974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:00.442 09:56:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:00.442 09:56:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:19:00.442 09:56:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:00.442 09:56:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:00.442 09:56:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:00.442 09:56:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:00.442 09:56:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:19:00.442 09:56:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:19:00.442 09:56:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:19:00.442 09:56:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.442 09:56:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:00.442 [2024-12-06 09:56:25.570731] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:00.442 null0 00:19:00.442 [2024-12-06 09:56:25.635125] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:00.442 [2024-12-06 09:56:25.659315] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:00.442 09:56:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.442 09:56:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:19:00.442 09:56:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:19:00.442 09:56:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:19:00.442 09:56:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:19:00.442 09:56:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:19:00.442 09:56:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:19:00.442 09:56:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:19:00.442 09:56:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=79887 00:19:00.442 09:56:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:19:00.442 09:56:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 79887 /var/tmp/bperf.sock 00:19:00.442 09:56:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 79887 ']' 00:19:00.442 09:56:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:19:00.442 09:56:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:00.442 09:56:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:00.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:00.442 09:56:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:00.442 09:56:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:00.700 [2024-12-06 09:56:25.714566] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:19:00.700 [2024-12-06 09:56:25.714892] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79887 ] 00:19:00.700 [2024-12-06 09:56:25.863817] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:00.700 [2024-12-06 09:56:25.930256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:00.957 09:56:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:00.957 09:56:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:19:00.957 09:56:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:19:00.957 09:56:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:19:00.957 09:56:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:19:01.217 [2024-12-06 09:56:26.255864] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:01.217 09:56:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:01.217 09:56:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:01.476 nvme0n1 00:19:01.476 09:56:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:19:01.476 09:56:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:01.738 Running I/O for 2 seconds... 
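
Each bperf pass has the same four moves, all of them traced above: start bdevperf against its own RPC socket, finish framework init, attach an NVMe-oF TCP controller with --ddgst so every data-bearing PDU carries a CRC32C data digest, then kick the workload from bdevperf.py. Pulled together (paths relative to the SPDK repo root), the first pass is essentially:

    ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
    ./scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests     # prints the JSON results seen below
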
00:19:03.616 16510.00 IOPS, 64.49 MiB/s [2024-12-06T09:56:28.888Z] 16700.50 IOPS, 65.24 MiB/s 00:19:03.616 Latency(us) 00:19:03.616 [2024-12-06T09:56:28.888Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:03.616 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:19:03.616 nvme0n1 : 2.01 16716.29 65.30 0.00 0.00 7651.92 6940.86 18826.71 00:19:03.616 [2024-12-06T09:56:28.888Z] =================================================================================================================== 00:19:03.616 [2024-12-06T09:56:28.888Z] Total : 16716.29 65.30 0.00 0.00 7651.92 6940.86 18826.71 00:19:03.616 { 00:19:03.616 "results": [ 00:19:03.616 { 00:19:03.616 "job": "nvme0n1", 00:19:03.616 "core_mask": "0x2", 00:19:03.616 "workload": "randread", 00:19:03.616 "status": "finished", 00:19:03.616 "queue_depth": 128, 00:19:03.616 "io_size": 4096, 00:19:03.616 "runtime": 2.005768, 00:19:03.616 "iops": 16716.29021900838, 00:19:03.616 "mibps": 65.29800866800149, 00:19:03.616 "io_failed": 0, 00:19:03.616 "io_timeout": 0, 00:19:03.616 "avg_latency_us": 7651.918690956811, 00:19:03.616 "min_latency_us": 6940.858181818182, 00:19:03.616 "max_latency_us": 18826.705454545456 00:19:03.616 } 00:19:03.616 ], 00:19:03.616 "core_count": 1 00:19:03.616 } 00:19:03.616 09:56:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:19:03.616 09:56:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:19:03.616 09:56:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:19:03.616 09:56:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:19:03.616 | select(.opcode=="crc32c") 00:19:03.616 | "\(.module_name) \(.executed)"' 00:19:03.616 09:56:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:19:03.875 09:56:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:19:03.875 09:56:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:19:03.875 09:56:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:19:03.875 09:56:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:19:03.875 09:56:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 79887 00:19:03.875 09:56:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 79887 ']' 00:19:03.875 09:56:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 79887 00:19:03.875 09:56:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:19:03.875 09:56:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:03.875 09:56:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79887 00:19:03.875 killing process with pid 79887 00:19:03.875 Received shutdown signal, test time was about 2.000000 seconds 00:19:03.875 00:19:03.875 Latency(us) 00:19:03.875 [2024-12-06T09:56:29.147Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:19:03.875 [2024-12-06T09:56:29.147Z] =================================================================================================================== 00:19:03.875 [2024-12-06T09:56:29.147Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:03.875 09:56:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:03.875 09:56:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:03.875 09:56:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79887' 00:19:03.875 09:56:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 79887 00:19:03.875 09:56:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 79887 00:19:04.135 09:56:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:19:04.135 09:56:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:19:04.135 09:56:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:19:04.135 09:56:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:19:04.135 09:56:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:19:04.135 09:56:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:19:04.135 09:56:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:19:04.135 09:56:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:19:04.135 09:56:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=79934 00:19:04.135 09:56:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 79934 /var/tmp/bperf.sock 00:19:04.135 09:56:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 79934 ']' 00:19:04.135 09:56:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:04.135 09:56:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:04.135 09:56:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:04.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:04.136 09:56:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:04.136 09:56:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:04.136 [2024-12-06 09:56:29.381219] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:19:04.136 [2024-12-06 09:56:29.381507] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-aI/O size of 131072 is greater than zero copy threshold (65536). 00:19:04.136 Zero copy mechanism will not be used. 00:19:04.136 llocations --file-prefix=spdk_pid79934 ] 00:19:04.395 [2024-12-06 09:56:29.532693] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:04.395 [2024-12-06 09:56:29.587175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:05.333 09:56:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:05.333 09:56:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:19:05.333 09:56:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:19:05.333 09:56:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:19:05.333 09:56:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:19:05.592 [2024-12-06 09:56:30.642175] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:05.592 09:56:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:05.592 09:56:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:05.851 nvme0n1 00:19:05.851 09:56:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:19:05.851 09:56:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:06.110 I/O size of 131072 is greater than zero copy threshold (65536). 00:19:06.110 Zero copy mechanism will not be used. 00:19:06.110 Running I/O for 2 seconds... 
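
Throughput alone does not prove the digests were exercised, so after every run the script reads the accel framework's crc32c counters out of the bdevperf app, as in the accel_get_stats / jq pair shown after the first run above. The check boils down to:

    ./scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
        | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"' \
        | {
            read -r acc_module acc_executed
            (( acc_executed > 0 ))             # some crc32c work was actually executed
            [[ $acc_module == software ]]      # by the expected module (software here; presumably dsa when scan_dsa is true)
          }
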
00:19:07.983 8640.00 IOPS, 1080.00 MiB/s [2024-12-06T09:56:33.255Z] 8648.00 IOPS, 1081.00 MiB/s 00:19:07.983 Latency(us) 00:19:07.983 [2024-12-06T09:56:33.255Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:07.983 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:19:07.983 nvme0n1 : 2.00 8649.78 1081.22 0.00 0.00 1846.82 1653.29 3291.69 00:19:07.983 [2024-12-06T09:56:33.255Z] =================================================================================================================== 00:19:07.983 [2024-12-06T09:56:33.255Z] Total : 8649.78 1081.22 0.00 0.00 1846.82 1653.29 3291.69 00:19:07.983 { 00:19:07.983 "results": [ 00:19:07.983 { 00:19:07.983 "job": "nvme0n1", 00:19:07.983 "core_mask": "0x2", 00:19:07.983 "workload": "randread", 00:19:07.983 "status": "finished", 00:19:07.983 "queue_depth": 16, 00:19:07.983 "io_size": 131072, 00:19:07.983 "runtime": 2.003287, 00:19:07.983 "iops": 8649.784079864743, 00:19:07.983 "mibps": 1081.2230099830929, 00:19:07.983 "io_failed": 0, 00:19:07.983 "io_timeout": 0, 00:19:07.983 "avg_latency_us": 1846.8217778897003, 00:19:07.983 "min_latency_us": 1653.2945454545454, 00:19:07.983 "max_latency_us": 3291.6945454545453 00:19:07.983 } 00:19:07.983 ], 00:19:07.983 "core_count": 1 00:19:07.983 } 00:19:07.983 09:56:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:19:07.983 09:56:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:19:07.983 09:56:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:19:07.983 09:56:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:19:07.983 09:56:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:19:07.983 | select(.opcode=="crc32c") 00:19:07.983 | "\(.module_name) \(.executed)"' 00:19:08.551 09:56:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:19:08.551 09:56:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:19:08.551 09:56:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:19:08.551 09:56:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:19:08.551 09:56:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 79934 00:19:08.551 09:56:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 79934 ']' 00:19:08.551 09:56:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 79934 00:19:08.551 09:56:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:19:08.551 09:56:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:08.551 09:56:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79934 00:19:08.551 killing process with pid 79934 00:19:08.551 Received shutdown signal, test time was about 2.000000 seconds 00:19:08.551 00:19:08.551 Latency(us) 00:19:08.551 [2024-12-06T09:56:33.823Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:19:08.551 [2024-12-06T09:56:33.823Z] =================================================================================================================== 00:19:08.551 [2024-12-06T09:56:33.823Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:08.551 09:56:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:08.551 09:56:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:08.551 09:56:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79934' 00:19:08.551 09:56:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 79934 00:19:08.551 09:56:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 79934 00:19:08.551 09:56:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:19:08.551 09:56:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:19:08.551 09:56:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:19:08.551 09:56:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:19:08.551 09:56:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:19:08.551 09:56:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:19:08.551 09:56:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:19:08.551 09:56:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:19:08.551 09:56:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80000 00:19:08.551 09:56:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80000 /var/tmp/bperf.sock 00:19:08.811 09:56:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 80000 ']' 00:19:08.811 09:56:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:08.811 09:56:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:08.811 09:56:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:08.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:08.811 09:56:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:08.811 09:56:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:08.811 [2024-12-06 09:56:33.877639] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:19:08.811 [2024-12-06 09:56:33.877935] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80000 ] 00:19:08.811 [2024-12-06 09:56:34.024421] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:08.811 [2024-12-06 09:56:34.075905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:09.071 09:56:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:09.071 09:56:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:19:09.071 09:56:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:19:09.071 09:56:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:19:09.071 09:56:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:19:09.330 [2024-12-06 09:56:34.441284] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:09.330 09:56:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:09.330 09:56:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:09.590 nvme0n1 00:19:09.868 09:56:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:19:09.868 09:56:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:09.868 Running I/O for 2 seconds... 
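
nvmf_digest_clean does not stop at one I/O shape: the four run_bperf invocations in this trace are randread 4096/qd 128, randread 131072/qd 16, randwrite 4096/qd 128 and finally randwrite 131072/qd 16 (the last of which follows below), always with scan_dsa=false on this runner. This is not the literal digest.sh code, but the same coverage could be written as:

    for spec in "randread 4096 128" "randread 131072 16" "randwrite 4096 128" "randwrite 131072 16"; do
        set -- $spec                    # split into rw, I/O size, queue depth
        run_bperf "$1" "$2" "$3" false  # last arg mirrors scan_dsa=false
    done
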
00:19:11.742 18543.00 IOPS, 72.43 MiB/s [2024-12-06T09:56:37.273Z] 18479.00 IOPS, 72.18 MiB/s 00:19:12.001 Latency(us) 00:19:12.001 [2024-12-06T09:56:37.273Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:12.001 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:12.001 nvme0n1 : 2.01 18476.23 72.17 0.00 0.00 6921.77 2293.76 15013.70 00:19:12.001 [2024-12-06T09:56:37.273Z] =================================================================================================================== 00:19:12.001 [2024-12-06T09:56:37.273Z] Total : 18476.23 72.17 0.00 0.00 6921.77 2293.76 15013.70 00:19:12.001 { 00:19:12.001 "results": [ 00:19:12.001 { 00:19:12.001 "job": "nvme0n1", 00:19:12.001 "core_mask": "0x2", 00:19:12.001 "workload": "randwrite", 00:19:12.001 "status": "finished", 00:19:12.001 "queue_depth": 128, 00:19:12.001 "io_size": 4096, 00:19:12.001 "runtime": 2.007228, 00:19:12.001 "iops": 18476.226915925843, 00:19:12.001 "mibps": 72.17276139033532, 00:19:12.001 "io_failed": 0, 00:19:12.001 "io_timeout": 0, 00:19:12.001 "avg_latency_us": 6921.765283052169, 00:19:12.001 "min_latency_us": 2293.76, 00:19:12.001 "max_latency_us": 15013.701818181818 00:19:12.001 } 00:19:12.001 ], 00:19:12.001 "core_count": 1 00:19:12.001 } 00:19:12.001 09:56:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:19:12.001 09:56:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:19:12.001 09:56:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:19:12.001 09:56:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:19:12.001 | select(.opcode=="crc32c") 00:19:12.001 | "\(.module_name) \(.executed)"' 00:19:12.001 09:56:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:19:12.260 09:56:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:19:12.261 09:56:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:19:12.261 09:56:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:19:12.261 09:56:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:19:12.261 09:56:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80000 00:19:12.261 09:56:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 80000 ']' 00:19:12.261 09:56:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 80000 00:19:12.261 09:56:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:19:12.261 09:56:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:12.261 09:56:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80000 00:19:12.261 killing process with pid 80000 00:19:12.261 Received shutdown signal, test time was about 2.000000 seconds 00:19:12.261 00:19:12.261 Latency(us) 00:19:12.261 [2024-12-06T09:56:37.533Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:12.261 
[2024-12-06T09:56:37.533Z] =================================================================================================================== 00:19:12.261 [2024-12-06T09:56:37.533Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:12.261 09:56:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:12.261 09:56:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:12.261 09:56:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80000' 00:19:12.261 09:56:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 80000 00:19:12.261 09:56:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 80000 00:19:12.520 09:56:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:19:12.520 09:56:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:19:12.520 09:56:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:19:12.520 09:56:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:19:12.520 09:56:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:19:12.520 09:56:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:19:12.520 09:56:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:19:12.520 09:56:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80054 00:19:12.520 09:56:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:19:12.520 09:56:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80054 /var/tmp/bperf.sock 00:19:12.520 09:56:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 80054 ']' 00:19:12.520 09:56:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:12.520 09:56:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:12.520 09:56:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:12.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:12.520 09:56:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:12.520 09:56:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:12.520 [2024-12-06 09:56:37.662443] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:19:12.520 [2024-12-06 09:56:37.662668] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80054 ] 00:19:12.520 I/O size of 131072 is greater than zero copy threshold (65536). 00:19:12.520 Zero copy mechanism will not be used. 00:19:12.779 [2024-12-06 09:56:37.802775] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:12.779 [2024-12-06 09:56:37.851054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:13.716 09:56:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:13.716 09:56:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:19:13.716 09:56:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:19:13.716 09:56:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:19:13.716 09:56:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:19:13.716 [2024-12-06 09:56:38.908441] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:13.716 09:56:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:13.716 09:56:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:13.976 nvme0n1 00:19:14.235 09:56:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:19:14.235 09:56:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:14.235 I/O size of 131072 is greater than zero copy threshold (65536). 00:19:14.235 Zero copy mechanism will not be used. 00:19:14.235 Running I/O for 2 seconds... 
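
The perform_tests output interleaved with the trace is plain JSON, with results[].job, iops, mibps and avg_latency_us plus a core_count, so the per-run numbers can be pulled out mechanically rather than read off the formatted table. Assuming one of those blocks has been saved to a file (bperf_results.json is just a placeholder name, not a file the test writes):

    jq -r '.results[] | "\(.job): \(.iops) IOPS, \(.mibps) MiB/s, \(.avg_latency_us) us avg"' bperf_results.json
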
00:19:16.108 7480.00 IOPS, 935.00 MiB/s [2024-12-06T09:56:41.380Z] 7526.00 IOPS, 940.75 MiB/s 00:19:16.109 Latency(us) 00:19:16.109 [2024-12-06T09:56:41.381Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:16.109 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:19:16.109 nvme0n1 : 2.00 7523.69 940.46 0.00 0.00 2121.88 1660.74 10902.81 00:19:16.109 [2024-12-06T09:56:41.381Z] =================================================================================================================== 00:19:16.109 [2024-12-06T09:56:41.381Z] Total : 7523.69 940.46 0.00 0.00 2121.88 1660.74 10902.81 00:19:16.109 { 00:19:16.109 "results": [ 00:19:16.109 { 00:19:16.109 "job": "nvme0n1", 00:19:16.109 "core_mask": "0x2", 00:19:16.109 "workload": "randwrite", 00:19:16.109 "status": "finished", 00:19:16.109 "queue_depth": 16, 00:19:16.109 "io_size": 131072, 00:19:16.109 "runtime": 2.002608, 00:19:16.109 "iops": 7523.689109401341, 00:19:16.109 "mibps": 940.4611386751676, 00:19:16.109 "io_failed": 0, 00:19:16.109 "io_timeout": 0, 00:19:16.109 "avg_latency_us": 2121.876134840138, 00:19:16.109 "min_latency_us": 1660.7418181818182, 00:19:16.109 "max_latency_us": 10902.807272727272 00:19:16.109 } 00:19:16.109 ], 00:19:16.109 "core_count": 1 00:19:16.109 } 00:19:16.109 09:56:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:19:16.109 09:56:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:19:16.368 09:56:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:19:16.368 09:56:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:19:16.368 | select(.opcode=="crc32c") 00:19:16.368 | "\(.module_name) \(.executed)"' 00:19:16.368 09:56:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:19:16.627 09:56:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:19:16.627 09:56:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:19:16.627 09:56:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:19:16.627 09:56:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:19:16.627 09:56:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80054 00:19:16.627 09:56:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 80054 ']' 00:19:16.627 09:56:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 80054 00:19:16.627 09:56:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:19:16.627 09:56:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:16.627 09:56:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80054 00:19:16.627 killing process with pid 80054 00:19:16.627 Received shutdown signal, test time was about 2.000000 seconds 00:19:16.627 00:19:16.627 Latency(us) 00:19:16.627 [2024-12-06T09:56:41.899Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:19:16.627 [2024-12-06T09:56:41.899Z] =================================================================================================================== 00:19:16.627 [2024-12-06T09:56:41.899Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:16.627 09:56:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:16.627 09:56:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:16.627 09:56:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80054' 00:19:16.627 09:56:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 80054 00:19:16.627 09:56:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 80054 00:19:16.886 09:56:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 79855 00:19:16.886 09:56:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 79855 ']' 00:19:16.886 09:56:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 79855 00:19:16.886 09:56:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:19:16.886 09:56:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:16.886 09:56:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79855 00:19:16.886 killing process with pid 79855 00:19:16.886 09:56:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:16.886 09:56:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:16.886 09:56:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79855' 00:19:16.886 09:56:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 79855 00:19:16.886 09:56:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 79855 00:19:17.145 ************************************ 00:19:17.145 END TEST nvmf_digest_clean 00:19:17.145 ************************************ 00:19:17.145 00:19:17.146 real 0m17.802s 00:19:17.146 user 0m34.144s 00:19:17.146 sys 0m4.830s 00:19:17.146 09:56:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:17.146 09:56:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:17.146 09:56:42 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:19:17.146 09:56:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:17.146 09:56:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:17.146 09:56:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:19:17.146 ************************************ 00:19:17.146 START TEST nvmf_digest_error 00:19:17.146 ************************************ 00:19:17.146 09:56:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:19:17.146 09:56:42 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:19:17.146 09:56:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:17.146 09:56:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:17.146 09:56:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:17.146 09:56:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=80140 00:19:17.146 09:56:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 80140 00:19:17.146 09:56:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80140 ']' 00:19:17.146 09:56:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:19:17.146 09:56:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:17.146 09:56:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:17.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:17.146 09:56:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:17.146 09:56:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:17.146 09:56:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:17.146 [2024-12-06 09:56:42.306234] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:19:17.146 [2024-12-06 09:56:42.306326] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:17.405 [2024-12-06 09:56:42.452284] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:17.405 [2024-12-06 09:56:42.497150] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:17.405 [2024-12-06 09:56:42.497211] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:17.405 [2024-12-06 09:56:42.497222] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:17.405 [2024-12-06 09:56:42.497229] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:17.405 [2024-12-06 09:56:42.497236] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:17.405 [2024-12-06 09:56:42.497613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:17.972 09:56:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:17.972 09:56:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:19:17.972 09:56:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:17.972 09:56:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:17.972 09:56:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:18.232 09:56:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:18.232 09:56:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:19:18.232 09:56:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.232 09:56:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:18.232 [2024-12-06 09:56:43.278117] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:19:18.232 09:56:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.232 09:56:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:19:18.232 09:56:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:19:18.232 09:56:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.232 09:56:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:18.232 [2024-12-06 09:56:43.354036] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:18.232 null0 00:19:18.232 [2024-12-06 09:56:43.415986] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:18.232 [2024-12-06 09:56:43.440164] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:18.232 09:56:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.232 09:56:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:19:18.232 09:56:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:19:18.232 09:56:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:19:18.233 09:56:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:19:18.233 09:56:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:19:18.233 09:56:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80172 00:19:18.233 09:56:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80172 /var/tmp/bperf.sock 00:19:18.233 09:56:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:19:18.233 09:56:43 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80172 ']' 00:19:18.233 09:56:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:18.233 09:56:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:18.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:18.233 09:56:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:18.233 09:56:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:18.233 09:56:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:18.492 [2024-12-06 09:56:43.504650] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:19:18.492 [2024-12-06 09:56:43.504805] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80172 ] 00:19:18.492 [2024-12-06 09:56:43.647676] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:18.492 [2024-12-06 09:56:43.695143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:18.752 [2024-12-06 09:56:43.763820] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:19.321 09:56:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:19.321 09:56:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:19:19.321 09:56:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:19:19.321 09:56:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:19:19.581 09:56:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:19:19.581 09:56:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.581 09:56:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:19.581 09:56:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.581 09:56:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:19.581 09:56:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:19.840 nvme0n1 00:19:19.840 09:56:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:19:19.840 09:56:44 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.840 09:56:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:19.840 09:56:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.840 09:56:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:19:19.840 09:56:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:19.840 Running I/O for 2 seconds... 00:19:20.098 [2024-12-06 09:56:45.126873] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:20.098 [2024-12-06 09:56:45.126931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13197 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.099 [2024-12-06 09:56:45.126945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.099 [2024-12-06 09:56:45.140397] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:20.099 [2024-12-06 09:56:45.140431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16338 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.099 [2024-12-06 09:56:45.140442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.099 [2024-12-06 09:56:45.153793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:20.099 [2024-12-06 09:56:45.153828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.099 [2024-12-06 09:56:45.153839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.099 [2024-12-06 09:56:45.167203] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:20.099 [2024-12-06 09:56:45.167235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20002 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.099 [2024-12-06 09:56:45.167246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.099 [2024-12-06 09:56:45.180641] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:20.099 [2024-12-06 09:56:45.180673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4355 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.099 [2024-12-06 09:56:45.180684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.099 [2024-12-06 09:56:45.194059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:20.099 [2024-12-06 09:56:45.194092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23676 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.099 [2024-12-06 09:56:45.194103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.099 [2024-12-06 09:56:45.207466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:20.099 [2024-12-06 09:56:45.207498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24138 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.099 [2024-12-06 09:56:45.207509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.099 [2024-12-06 09:56:45.220962] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:20.099 [2024-12-06 09:56:45.220995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9317 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.099 [2024-12-06 09:56:45.221005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.099 [2024-12-06 09:56:45.234634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:20.099 [2024-12-06 09:56:45.234665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:15695 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.099 [2024-12-06 09:56:45.234676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.099 [2024-12-06 09:56:45.248204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:20.099 [2024-12-06 09:56:45.248236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:2640 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.099 [2024-12-06 09:56:45.248247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.099 [2024-12-06 09:56:45.261560] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:20.099 [2024-12-06 09:56:45.261600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:2562 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.099 [2024-12-06 09:56:45.261611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.099 [2024-12-06 09:56:45.274885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:20.099 [2024-12-06 09:56:45.274917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:4230 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.099 [2024-12-06 09:56:45.274927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.099 [2024-12-06 09:56:45.288339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:20.099 [2024-12-06 09:56:45.288371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:25 nsid:1 lba:7621 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.099 [2024-12-06 09:56:45.288382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.099 [2024-12-06 09:56:45.301698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:20.099 [2024-12-06 09:56:45.301729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:8478 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.099 [2024-12-06 09:56:45.301739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.099 [2024-12-06 09:56:45.315065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:20.099 [2024-12-06 09:56:45.315096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:14559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.099 [2024-12-06 09:56:45.315107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.099 [2024-12-06 09:56:45.328620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:20.099 [2024-12-06 09:56:45.328650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:746 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.099 [2024-12-06 09:56:45.328660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.099 [2024-12-06 09:56:45.342106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:20.099 [2024-12-06 09:56:45.342139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:13032 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.099 [2024-12-06 09:56:45.342150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.099 [2024-12-06 09:56:45.355998] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:20.099 [2024-12-06 09:56:45.356029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:7525 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.099 [2024-12-06 09:56:45.356040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.358 [2024-12-06 09:56:45.369567] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:20.358 [2024-12-06 09:56:45.369610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:5692 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.358 [2024-12-06 09:56:45.369622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.358 [2024-12-06 09:56:45.383307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:20.358 [2024-12-06 09:56:45.383339] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:15272 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.358 [2024-12-06 09:56:45.383351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.358 [2024-12-06 09:56:45.396743] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:20.358 [2024-12-06 09:56:45.396774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:6764 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.358 [2024-12-06 09:56:45.396785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.358 [2024-12-06 09:56:45.410237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:20.358 [2024-12-06 09:56:45.410269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:20054 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.358 [2024-12-06 09:56:45.410280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.358 [2024-12-06 09:56:45.423779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:20.358 [2024-12-06 09:56:45.423811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:9639 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.358 [2024-12-06 09:56:45.423821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.358 [2024-12-06 09:56:45.437211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:20.358 [2024-12-06 09:56:45.437243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:2187 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.358 [2024-12-06 09:56:45.437254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.358 [2024-12-06 09:56:45.450699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:20.358 [2024-12-06 09:56:45.450730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:13619 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.358 [2024-12-06 09:56:45.450741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.358 [2024-12-06 09:56:45.464221] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:20.358 [2024-12-06 09:56:45.464254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:975 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.358 [2024-12-06 09:56:45.464265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.358 [2024-12-06 09:56:45.477674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:20.358 
[2024-12-06 09:56:45.477705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:18649 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.358 [2024-12-06 09:56:45.477716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.358 [2024-12-06 09:56:45.491060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:20.358 [2024-12-06 09:56:45.491093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:12039 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.358 [2024-12-06 09:56:45.491103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.358 [2024-12-06 09:56:45.504462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:20.358 [2024-12-06 09:56:45.504495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:21843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.358 [2024-12-06 09:56:45.504506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.358 [2024-12-06 09:56:45.517850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:20.358 [2024-12-06 09:56:45.517880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:10337 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.358 [2024-12-06 09:56:45.517891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.358 [2024-12-06 09:56:45.531273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:20.358 [2024-12-06 09:56:45.531306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:13758 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.358 [2024-12-06 09:56:45.531317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.358 [2024-12-06 09:56:45.545010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:20.358 [2024-12-06 09:56:45.545042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:14917 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.358 [2024-12-06 09:56:45.545053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.358 [2024-12-06 09:56:45.559674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:20.358 [2024-12-06 09:56:45.559705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:1037 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.358 [2024-12-06 09:56:45.559716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.358 [2024-12-06 09:56:45.573570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x759b50) 00:19:20.358 [2024-12-06 09:56:45.573612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:14154 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.358 [2024-12-06 09:56:45.573623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.358 [2024-12-06 09:56:45.587205] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:20.358 [2024-12-06 09:56:45.587236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:12403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.358 [2024-12-06 09:56:45.587247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.358 [2024-12-06 09:56:45.600699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:20.358 [2024-12-06 09:56:45.600731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:284 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.358 [2024-12-06 09:56:45.600742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.358 [2024-12-06 09:56:45.614223] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:20.358 [2024-12-06 09:56:45.614254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:18390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.358 [2024-12-06 09:56:45.614265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.358 [2024-12-06 09:56:45.627659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:20.358 [2024-12-06 09:56:45.627689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:22138 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.358 [2024-12-06 09:56:45.627699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.618 [2024-12-06 09:56:45.641336] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:20.618 [2024-12-06 09:56:45.641368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:18754 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.618 [2024-12-06 09:56:45.641379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.618 [2024-12-06 09:56:45.654909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:20.618 [2024-12-06 09:56:45.654940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:13332 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.618 [2024-12-06 09:56:45.654950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.618 [2024-12-06 09:56:45.668265] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:20.618 [2024-12-06 09:56:45.668296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:18002 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.618 [2024-12-06 09:56:45.668307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.618 [2024-12-06 09:56:45.681672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:20.618 [2024-12-06 09:56:45.681703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:4240 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.618 [2024-12-06 09:56:45.681713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.618 [2024-12-06 09:56:45.695026] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:20.618 [2024-12-06 09:56:45.695057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:12967 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.618 [2024-12-06 09:56:45.695068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.618 [2024-12-06 09:56:45.708402] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:20.618 [2024-12-06 09:56:45.708434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:7013 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.618 [2024-12-06 09:56:45.708444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.618 [2024-12-06 09:56:45.721775] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:20.618 [2024-12-06 09:56:45.721807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:23122 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.618 [2024-12-06 09:56:45.721818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.618 [2024-12-06 09:56:45.735135] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:20.618 [2024-12-06 09:56:45.735166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:8924 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.618 [2024-12-06 09:56:45.735176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.618 [2024-12-06 09:56:45.749157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:20.618 [2024-12-06 09:56:45.749189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:17228 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.618 [2024-12-06 09:56:45.749199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:19:20.618 [2024-12-06 09:56:45.763740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:20.618 [2024-12-06 09:56:45.763772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:13557 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.618 [2024-12-06 09:56:45.763784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.618 [2024-12-06 09:56:45.778576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:20.618 [2024-12-06 09:56:45.778619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:2525 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.618 [2024-12-06 09:56:45.778631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.618 [2024-12-06 09:56:45.792797] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:20.618 [2024-12-06 09:56:45.792830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:862 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.618 [2024-12-06 09:56:45.792841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.618 [2024-12-06 09:56:45.806935] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:20.618 [2024-12-06 09:56:45.806969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:1039 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.618 [2024-12-06 09:56:45.806980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.618 [2024-12-06 09:56:45.821115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:20.618 [2024-12-06 09:56:45.821148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:11869 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.618 [2024-12-06 09:56:45.821159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.618 [2024-12-06 09:56:45.835396] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:20.618 [2024-12-06 09:56:45.835429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:3108 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.618 [2024-12-06 09:56:45.835440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.618 [2024-12-06 09:56:45.849538] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:20.618 [2024-12-06 09:56:45.849577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:10949 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.618 [2024-12-06 09:56:45.849589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.618 [2024-12-06 09:56:45.863630] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:20.618 [2024-12-06 09:56:45.863661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:1531 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.618 [2024-12-06 09:56:45.863672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.618 [2024-12-06 09:56:45.878053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:20.618 [2024-12-06 09:56:45.878084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:1764 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.618 [2024-12-06 09:56:45.878095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.877 [2024-12-06 09:56:45.891988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:20.877 [2024-12-06 09:56:45.892019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:22411 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.877 [2024-12-06 09:56:45.892030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.877 [2024-12-06 09:56:45.905401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:20.877 [2024-12-06 09:56:45.905435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:7792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.877 [2024-12-06 09:56:45.905445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.877 [2024-12-06 09:56:45.918867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:20.877 [2024-12-06 09:56:45.918898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:21579 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.878 [2024-12-06 09:56:45.918909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.878 [2024-12-06 09:56:45.932318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:20.878 [2024-12-06 09:56:45.932349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:16768 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.878 [2024-12-06 09:56:45.932360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.878 [2024-12-06 09:56:45.945681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:20.878 [2024-12-06 09:56:45.945712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:24485 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.878 [2024-12-06 09:56:45.945722] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.878 [2024-12-06 09:56:45.958981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:20.878 [2024-12-06 09:56:45.959012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:10864 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.878 [2024-12-06 09:56:45.959022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.878 [2024-12-06 09:56:45.972316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:20.878 [2024-12-06 09:56:45.972346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:15978 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.878 [2024-12-06 09:56:45.972357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.878 [2024-12-06 09:56:45.991559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:20.878 [2024-12-06 09:56:45.991598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:1754 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.878 [2024-12-06 09:56:45.991610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.878 [2024-12-06 09:56:46.004908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:20.878 [2024-12-06 09:56:46.004941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:6967 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.878 [2024-12-06 09:56:46.004952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.878 [2024-12-06 09:56:46.018285] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:20.878 [2024-12-06 09:56:46.018316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:7043 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.878 [2024-12-06 09:56:46.018327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.878 [2024-12-06 09:56:46.031844] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:20.878 [2024-12-06 09:56:46.031876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:2547 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.878 [2024-12-06 09:56:46.031887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.878 [2024-12-06 09:56:46.045171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:20.878 [2024-12-06 09:56:46.045202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:21322 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:20.878 [2024-12-06 09:56:46.045213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.878 [2024-12-06 09:56:46.058482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:20.878 [2024-12-06 09:56:46.058515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:14900 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.878 [2024-12-06 09:56:46.058525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.878 [2024-12-06 09:56:46.071824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:20.878 [2024-12-06 09:56:46.071855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:5401 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.878 [2024-12-06 09:56:46.071866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.878 [2024-12-06 09:56:46.085339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:20.878 [2024-12-06 09:56:46.085372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:15539 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.878 [2024-12-06 09:56:46.085383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.878 [2024-12-06 09:56:46.098784] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:20.878 [2024-12-06 09:56:46.098816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:4311 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.878 [2024-12-06 09:56:46.098827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.878 18470.00 IOPS, 72.15 MiB/s [2024-12-06T09:56:46.150Z] [2024-12-06 09:56:46.113456] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:20.878 [2024-12-06 09:56:46.113489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:959 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.878 [2024-12-06 09:56:46.113501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.878 [2024-12-06 09:56:46.126875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:20.878 [2024-12-06 09:56:46.126908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:850 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.878 [2024-12-06 09:56:46.126920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.878 [2024-12-06 09:56:46.140755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:20.878 [2024-12-06 09:56:46.140792] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:2726 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.878 [2024-12-06 09:56:46.140803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:21.137 [2024-12-06 09:56:46.154145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:21.137 [2024-12-06 09:56:46.154177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:23864 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.137 [2024-12-06 09:56:46.154188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:21.137 [2024-12-06 09:56:46.167486] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:21.137 [2024-12-06 09:56:46.167519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:23596 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.137 [2024-12-06 09:56:46.167531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:21.137 [2024-12-06 09:56:46.180845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:21.137 [2024-12-06 09:56:46.180877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:15613 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.137 [2024-12-06 09:56:46.180888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:21.137 [2024-12-06 09:56:46.194328] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:21.137 [2024-12-06 09:56:46.194361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:7157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.137 [2024-12-06 09:56:46.194373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:21.137 [2024-12-06 09:56:46.207739] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:21.138 [2024-12-06 09:56:46.207771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:7124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.138 [2024-12-06 09:56:46.207782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:21.138 [2024-12-06 09:56:46.221089] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:21.138 [2024-12-06 09:56:46.221120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:15101 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.138 [2024-12-06 09:56:46.221131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:21.138 [2024-12-06 09:56:46.234421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 
00:19:21.138 [2024-12-06 09:56:46.234452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:19865 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.138 [2024-12-06 09:56:46.234463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:21.138 [2024-12-06 09:56:46.247857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:21.138 [2024-12-06 09:56:46.247890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:22271 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.138 [2024-12-06 09:56:46.247901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:21.138 [2024-12-06 09:56:46.261598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:21.138 [2024-12-06 09:56:46.261630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:20420 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.138 [2024-12-06 09:56:46.261641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:21.138 [2024-12-06 09:56:46.274937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:21.138 [2024-12-06 09:56:46.274969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:8424 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.138 [2024-12-06 09:56:46.274980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:21.138 [2024-12-06 09:56:46.288360] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:21.138 [2024-12-06 09:56:46.288391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:8023 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.138 [2024-12-06 09:56:46.288402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:21.138 [2024-12-06 09:56:46.301682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:21.138 [2024-12-06 09:56:46.301714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:587 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.138 [2024-12-06 09:56:46.301724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:21.138 [2024-12-06 09:56:46.315050] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:21.138 [2024-12-06 09:56:46.315081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:7631 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.138 [2024-12-06 09:56:46.315093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:21.138 [2024-12-06 09:56:46.328418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x759b50) 00:19:21.138 [2024-12-06 09:56:46.328449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:16846 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.138 [2024-12-06 09:56:46.328460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:21.138 [2024-12-06 09:56:46.341863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:21.138 [2024-12-06 09:56:46.341896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:6421 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.138 [2024-12-06 09:56:46.341907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:21.138 [2024-12-06 09:56:46.355150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:21.138 [2024-12-06 09:56:46.355182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:13976 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.138 [2024-12-06 09:56:46.355201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:21.138 [2024-12-06 09:56:46.368467] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:21.138 [2024-12-06 09:56:46.368498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:9835 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.138 [2024-12-06 09:56:46.368509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:21.138 [2024-12-06 09:56:46.382249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:21.138 [2024-12-06 09:56:46.382281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:22212 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.138 [2024-12-06 09:56:46.382292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:21.138 [2024-12-06 09:56:46.396026] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:21.138 [2024-12-06 09:56:46.396058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:19554 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.138 [2024-12-06 09:56:46.396069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:21.397 [2024-12-06 09:56:46.409617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:21.397 [2024-12-06 09:56:46.409647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:3280 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.397 [2024-12-06 09:56:46.409657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:21.398 [2024-12-06 09:56:46.423184] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:21.398 [2024-12-06 09:56:46.423241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:15299 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.398 [2024-12-06 09:56:46.423253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:21.398 [2024-12-06 09:56:46.436752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:21.398 [2024-12-06 09:56:46.436786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:6158 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.398 [2024-12-06 09:56:46.436797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:21.398 [2024-12-06 09:56:46.450177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:21.398 [2024-12-06 09:56:46.450209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:1920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.398 [2024-12-06 09:56:46.450220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:21.398 [2024-12-06 09:56:46.463564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:21.398 [2024-12-06 09:56:46.463602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:3087 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.398 [2024-12-06 09:56:46.463613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:21.398 [2024-12-06 09:56:46.476986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:21.398 [2024-12-06 09:56:46.477017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23467 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.398 [2024-12-06 09:56:46.477028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:21.398 [2024-12-06 09:56:46.490410] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:21.398 [2024-12-06 09:56:46.490442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:21277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.398 [2024-12-06 09:56:46.490453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:21.398 [2024-12-06 09:56:46.503757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:21.398 [2024-12-06 09:56:46.503788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:9053 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.398 [2024-12-06 09:56:46.503799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:19:21.398 [2024-12-06 09:56:46.517108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:21.398 [2024-12-06 09:56:46.517139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:10145 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.398 [2024-12-06 09:56:46.517150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:21.398 [2024-12-06 09:56:46.530505] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:21.398 [2024-12-06 09:56:46.530536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:10667 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.398 [2024-12-06 09:56:46.530547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:21.398 [2024-12-06 09:56:46.543897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:21.398 [2024-12-06 09:56:46.543927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.398 [2024-12-06 09:56:46.543938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:21.398 [2024-12-06 09:56:46.557275] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:21.398 [2024-12-06 09:56:46.557306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:18840 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.398 [2024-12-06 09:56:46.557317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:21.398 [2024-12-06 09:56:46.571089] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:21.398 [2024-12-06 09:56:46.571120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:3329 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.398 [2024-12-06 09:56:46.571130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:21.398 [2024-12-06 09:56:46.585219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:21.398 [2024-12-06 09:56:46.585250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:23804 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.398 [2024-12-06 09:56:46.585261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:21.398 [2024-12-06 09:56:46.598712] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:21.398 [2024-12-06 09:56:46.598743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:5711 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.398 [2024-12-06 09:56:46.598754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:21.398 [2024-12-06 09:56:46.612091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:21.398 [2024-12-06 09:56:46.612121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:8833 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.398 [2024-12-06 09:56:46.612132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:21.398 [2024-12-06 09:56:46.625631] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:21.398 [2024-12-06 09:56:46.625662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:7642 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.398 [2024-12-06 09:56:46.625673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:21.398 [2024-12-06 09:56:46.639553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:21.398 [2024-12-06 09:56:46.639592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:21126 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.398 [2024-12-06 09:56:46.639603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:21.398 [2024-12-06 09:56:46.652969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:21.398 [2024-12-06 09:56:46.653000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19784 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.398 [2024-12-06 09:56:46.653011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:21.398 [2024-12-06 09:56:46.666343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:21.398 [2024-12-06 09:56:46.666375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:8983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.398 [2024-12-06 09:56:46.666385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:21.658 [2024-12-06 09:56:46.679802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:21.658 [2024-12-06 09:56:46.679833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:7687 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.658 [2024-12-06 09:56:46.679843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:21.658 [2024-12-06 09:56:46.693155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:21.658 [2024-12-06 09:56:46.693186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:17118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.658 [2024-12-06 09:56:46.693197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:21.658 [2024-12-06 09:56:46.706536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:21.658 [2024-12-06 09:56:46.706576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:21430 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.658 [2024-12-06 09:56:46.706589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:21.658 [2024-12-06 09:56:46.719913] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:21.658 [2024-12-06 09:56:46.719943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18187 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.658 [2024-12-06 09:56:46.719955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:21.658 [2024-12-06 09:56:46.733264] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:21.658 [2024-12-06 09:56:46.733295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:21910 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.658 [2024-12-06 09:56:46.733306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:21.658 [2024-12-06 09:56:46.746675] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:21.658 [2024-12-06 09:56:46.746706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11341 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.658 [2024-12-06 09:56:46.746716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:21.658 [2024-12-06 09:56:46.759980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:21.658 [2024-12-06 09:56:46.760010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.658 [2024-12-06 09:56:46.760021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:21.658 [2024-12-06 09:56:46.773283] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:21.658 [2024-12-06 09:56:46.773314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15036 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.658 [2024-12-06 09:56:46.773325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:21.658 [2024-12-06 09:56:46.786712] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:21.658 [2024-12-06 09:56:46.786742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2752 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.658 [2024-12-06 09:56:46.786753] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:21.658 [2024-12-06 09:56:46.800032] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:21.658 [2024-12-06 09:56:46.800063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21706 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.658 [2024-12-06 09:56:46.800074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:21.658 [2024-12-06 09:56:46.813439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:21.658 [2024-12-06 09:56:46.813471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8621 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.659 [2024-12-06 09:56:46.813483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:21.659 [2024-12-06 09:56:46.827140] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:21.659 [2024-12-06 09:56:46.827171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22840 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.659 [2024-12-06 09:56:46.827182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:21.659 [2024-12-06 09:56:46.840782] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:21.659 [2024-12-06 09:56:46.840816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10225 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.659 [2024-12-06 09:56:46.840827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:21.659 [2024-12-06 09:56:46.860184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:21.659 [2024-12-06 09:56:46.860217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25597 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.659 [2024-12-06 09:56:46.860228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:21.659 [2024-12-06 09:56:46.873693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:21.659 [2024-12-06 09:56:46.873726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20878 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.659 [2024-12-06 09:56:46.873737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:21.659 [2024-12-06 09:56:46.887241] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:21.659 [2024-12-06 09:56:46.887272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19008 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.659 [2024-12-06 
09:56:46.887283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:21.659 [2024-12-06 09:56:46.901221] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:21.659 [2024-12-06 09:56:46.901253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:454 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.659 [2024-12-06 09:56:46.901263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:21.659 [2024-12-06 09:56:46.915525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:21.659 [2024-12-06 09:56:46.915558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.659 [2024-12-06 09:56:46.915581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:21.918 [2024-12-06 09:56:46.929762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:21.918 [2024-12-06 09:56:46.929795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24978 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.918 [2024-12-06 09:56:46.929806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:21.918 [2024-12-06 09:56:46.943951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:21.918 [2024-12-06 09:56:46.943995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4163 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.918 [2024-12-06 09:56:46.944007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:21.918 [2024-12-06 09:56:46.958162] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:21.918 [2024-12-06 09:56:46.958194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7434 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.918 [2024-12-06 09:56:46.958205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:21.918 [2024-12-06 09:56:46.972302] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:21.918 [2024-12-06 09:56:46.972334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:6548 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.918 [2024-12-06 09:56:46.972345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:21.918 [2024-12-06 09:56:46.986487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:21.918 [2024-12-06 09:56:46.986519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:5083 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:19:21.918 [2024-12-06 09:56:46.986530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:21.918 [2024-12-06 09:56:47.000602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:21.918 [2024-12-06 09:56:47.000633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:25184 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.918 [2024-12-06 09:56:47.000644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:21.918 [2024-12-06 09:56:47.014901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:21.918 [2024-12-06 09:56:47.014934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:2230 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.918 [2024-12-06 09:56:47.014945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:21.918 [2024-12-06 09:56:47.029095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:21.918 [2024-12-06 09:56:47.029126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:17845 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.918 [2024-12-06 09:56:47.029137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:21.918 [2024-12-06 09:56:47.043202] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:21.918 [2024-12-06 09:56:47.043233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:5054 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.918 [2024-12-06 09:56:47.043244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:21.918 [2024-12-06 09:56:47.057357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:21.918 [2024-12-06 09:56:47.057391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:14952 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.918 [2024-12-06 09:56:47.057402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:21.918 [2024-12-06 09:56:47.071225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:21.918 [2024-12-06 09:56:47.071255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:1017 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.918 [2024-12-06 09:56:47.071265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:21.919 [2024-12-06 09:56:47.084655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:21.919 [2024-12-06 09:56:47.084685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 
lba:4106 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.919 [2024-12-06 09:56:47.084696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:21.919 [2024-12-06 09:56:47.097960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:21.919 [2024-12-06 09:56:47.097991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:9150 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.919 [2024-12-06 09:56:47.098002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:21.919 18596.50 IOPS, 72.64 MiB/s [2024-12-06T09:56:47.191Z] [2024-12-06 09:56:47.112613] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x759b50) 00:19:21.919 [2024-12-06 09:56:47.112644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21209 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.919 [2024-12-06 09:56:47.112655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:21.919 00:19:21.919 Latency(us) 00:19:21.919 [2024-12-06T09:56:47.191Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:21.919 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:19:21.919 nvme0n1 : 2.01 18579.08 72.57 0.00 0.00 6885.20 6583.39 25856.93 00:19:21.919 [2024-12-06T09:56:47.191Z] =================================================================================================================== 00:19:21.919 [2024-12-06T09:56:47.191Z] Total : 18579.08 72.57 0.00 0.00 6885.20 6583.39 25856.93 00:19:21.919 { 00:19:21.919 "results": [ 00:19:21.919 { 00:19:21.919 "job": "nvme0n1", 00:19:21.919 "core_mask": "0x2", 00:19:21.919 "workload": "randread", 00:19:21.919 "status": "finished", 00:19:21.919 "queue_depth": 128, 00:19:21.919 "io_size": 4096, 00:19:21.919 "runtime": 2.008765, 00:19:21.919 "iops": 18579.077194196434, 00:19:21.919 "mibps": 72.57452028982982, 00:19:21.919 "io_failed": 0, 00:19:21.919 "io_timeout": 0, 00:19:21.919 "avg_latency_us": 6885.199890288432, 00:19:21.919 "min_latency_us": 6583.389090909091, 00:19:21.919 "max_latency_us": 25856.93090909091 00:19:21.919 } 00:19:21.919 ], 00:19:21.919 "core_count": 1 00:19:21.919 } 00:19:21.919 09:56:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:19:21.919 09:56:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:19:21.919 09:56:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:19:21.919 09:56:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:19:21.919 | .driver_specific 00:19:21.919 | .nvme_error 00:19:21.919 | .status_code 00:19:21.919 | .command_transient_transport_error' 00:19:22.177 09:56:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 146 > 0 )) 00:19:22.177 09:56:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80172 00:19:22.177 09:56:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@954 -- # '[' -z 80172 ']' 00:19:22.177 09:56:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80172 00:19:22.177 09:56:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:19:22.177 09:56:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:22.177 09:56:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80172 00:19:22.177 09:56:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:22.177 09:56:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:22.177 killing process with pid 80172 00:19:22.177 09:56:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80172' 00:19:22.177 09:56:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80172 00:19:22.177 Received shutdown signal, test time was about 2.000000 seconds 00:19:22.177 00:19:22.177 Latency(us) 00:19:22.177 [2024-12-06T09:56:47.449Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:22.177 [2024-12-06T09:56:47.449Z] =================================================================================================================== 00:19:22.177 [2024-12-06T09:56:47.449Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:22.177 09:56:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80172 00:19:22.435 09:56:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:19:22.436 09:56:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:19:22.436 09:56:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:19:22.436 09:56:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:19:22.436 09:56:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:19:22.436 09:56:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80232 00:19:22.436 09:56:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:19:22.436 09:56:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80232 /var/tmp/bperf.sock 00:19:22.436 09:56:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80232 ']' 00:19:22.436 09:56:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:22.436 09:56:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:22.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:22.436 09:56:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
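The trace above repeats the same harness step for the next workload pass (randread, 128 KiB I/O, queue depth 16): the previous bdevperf instance (pid 80172) is killed, a fresh one is started against the /var/tmp/bperf.sock RPC socket, and the transient-transport-error count is read back from bdev_get_iostat and asserted to be greater than zero. A condensed sketch of just those two pieces, using only the commands and paths visible in this log (the bdev name nvme0n1 and the socket path are the ones this run uses; the real host/digest.sh also waits for the socket, attaches the controller with --ddgst, and injects the crc32c corruption before checking the count):

    # launch bdevperf exactly as run_bperf_err randread 131072 16 does in this log
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &
    bperfpid=$!

    # after a 2-second test pass, read the COMMAND TRANSIENT TRANSPORT ERROR count
    # that --nvme-error-stat accumulates for the nvme0n1 bdev
    errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')

    # the test passes this stage only if at least one such completion was seen
    (( errcount > 0 ))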
00:19:22.436 09:56:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:22.436 09:56:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:22.693 I/O size of 131072 is greater than zero copy threshold (65536). 00:19:22.693 Zero copy mechanism will not be used. 00:19:22.693 [2024-12-06 09:56:47.735009] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:19:22.693 [2024-12-06 09:56:47.735107] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80232 ] 00:19:22.693 [2024-12-06 09:56:47.877691] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:22.693 [2024-12-06 09:56:47.916038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:22.951 [2024-12-06 09:56:47.984081] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:22.951 09:56:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:22.951 09:56:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:19:22.951 09:56:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:19:22.951 09:56:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:19:23.209 09:56:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:19:23.209 09:56:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.209 09:56:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:23.209 09:56:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.209 09:56:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:23.209 09:56:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:23.466 nvme0n1 00:19:23.466 09:56:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:19:23.466 09:56:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.466 09:56:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:23.466 09:56:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.466 09:56:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:19:23.466 09:56:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:23.733 I/O size of 131072 is greater than zero copy threshold (65536). 00:19:23.733 Zero copy mechanism will not be used. 00:19:23.733 Running I/O for 2 seconds... 00:19:23.733 [2024-12-06 09:56:48.794868] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.733 [2024-12-06 09:56:48.794930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.733 [2024-12-06 09:56:48.794945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:23.733 [2024-12-06 09:56:48.798667] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.733 [2024-12-06 09:56:48.798703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.733 [2024-12-06 09:56:48.798715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:23.733 [2024-12-06 09:56:48.802368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.733 [2024-12-06 09:56:48.802402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.733 [2024-12-06 09:56:48.802415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:23.733 [2024-12-06 09:56:48.806014] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.733 [2024-12-06 09:56:48.806047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.733 [2024-12-06 09:56:48.806058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:23.733 [2024-12-06 09:56:48.809650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.733 [2024-12-06 09:56:48.809680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.733 [2024-12-06 09:56:48.809690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:23.733 [2024-12-06 09:56:48.813237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.733 [2024-12-06 09:56:48.813270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.733 [2024-12-06 09:56:48.813281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:23.733 [2024-12-06 09:56:48.816840] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.733 [2024-12-06 09:56:48.816875] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.733 [2024-12-06 09:56:48.816886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:23.733 [2024-12-06 09:56:48.820410] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.733 [2024-12-06 09:56:48.820460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.733 [2024-12-06 09:56:48.820471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:23.733 [2024-12-06 09:56:48.824025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.733 [2024-12-06 09:56:48.824058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.733 [2024-12-06 09:56:48.824069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:23.733 [2024-12-06 09:56:48.827536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.733 [2024-12-06 09:56:48.827588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.733 [2024-12-06 09:56:48.827600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:23.733 [2024-12-06 09:56:48.831176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.733 [2024-12-06 09:56:48.831216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.733 [2024-12-06 09:56:48.831227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:23.733 [2024-12-06 09:56:48.834780] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.733 [2024-12-06 09:56:48.834813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.733 [2024-12-06 09:56:48.834824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:23.733 [2024-12-06 09:56:48.838446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.733 [2024-12-06 09:56:48.838480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.733 [2024-12-06 09:56:48.838491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:23.733 [2024-12-06 09:56:48.842217] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x134d620) 00:19:23.733 [2024-12-06 09:56:48.842250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.733 [2024-12-06 09:56:48.842262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:23.733 [2024-12-06 09:56:48.845829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.733 [2024-12-06 09:56:48.845861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.733 [2024-12-06 09:56:48.845873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:23.733 [2024-12-06 09:56:48.849386] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.733 [2024-12-06 09:56:48.849418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.733 [2024-12-06 09:56:48.849429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:23.733 [2024-12-06 09:56:48.853033] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.733 [2024-12-06 09:56:48.853066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.733 [2024-12-06 09:56:48.853077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:23.733 [2024-12-06 09:56:48.856647] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.733 [2024-12-06 09:56:48.856678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.733 [2024-12-06 09:56:48.856689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:23.733 [2024-12-06 09:56:48.860197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.733 [2024-12-06 09:56:48.860230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.733 [2024-12-06 09:56:48.860241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:23.733 [2024-12-06 09:56:48.863850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.734 [2024-12-06 09:56:48.863883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.734 [2024-12-06 09:56:48.863894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:23.734 [2024-12-06 09:56:48.867474] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.734 [2024-12-06 09:56:48.867525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.734 [2024-12-06 09:56:48.867536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:23.734 [2024-12-06 09:56:48.871028] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.734 [2024-12-06 09:56:48.871059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.734 [2024-12-06 09:56:48.871070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:23.734 [2024-12-06 09:56:48.874602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.734 [2024-12-06 09:56:48.874633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.734 [2024-12-06 09:56:48.874644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:23.734 [2024-12-06 09:56:48.878166] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.734 [2024-12-06 09:56:48.878199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.734 [2024-12-06 09:56:48.878210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:23.734 [2024-12-06 09:56:48.881845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.734 [2024-12-06 09:56:48.881877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.734 [2024-12-06 09:56:48.881888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:23.734 [2024-12-06 09:56:48.885406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.734 [2024-12-06 09:56:48.885439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.734 [2024-12-06 09:56:48.885450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:23.734 [2024-12-06 09:56:48.889104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.734 [2024-12-06 09:56:48.889137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.734 [2024-12-06 09:56:48.889148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:19:23.734 [2024-12-06 09:56:48.892713] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.734 [2024-12-06 09:56:48.892745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.734 [2024-12-06 09:56:48.892756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:23.734 [2024-12-06 09:56:48.896315] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.734 [2024-12-06 09:56:48.896347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.734 [2024-12-06 09:56:48.896358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:23.734 [2024-12-06 09:56:48.899906] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.734 [2024-12-06 09:56:48.899938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.734 [2024-12-06 09:56:48.899948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:23.734 [2024-12-06 09:56:48.903417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.734 [2024-12-06 09:56:48.903451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.734 [2024-12-06 09:56:48.903461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:23.734 [2024-12-06 09:56:48.907047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.734 [2024-12-06 09:56:48.907078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.734 [2024-12-06 09:56:48.907088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:23.734 [2024-12-06 09:56:48.910559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.734 [2024-12-06 09:56:48.910599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.734 [2024-12-06 09:56:48.910611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:23.734 [2024-12-06 09:56:48.914137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.734 [2024-12-06 09:56:48.914169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.734 [2024-12-06 09:56:48.914180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:23.734 [2024-12-06 09:56:48.917716] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.734 [2024-12-06 09:56:48.917748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.734 [2024-12-06 09:56:48.917758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:23.734 [2024-12-06 09:56:48.921329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.734 [2024-12-06 09:56:48.921363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.734 [2024-12-06 09:56:48.921373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:23.734 [2024-12-06 09:56:48.924926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.734 [2024-12-06 09:56:48.924958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.734 [2024-12-06 09:56:48.924970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:23.734 [2024-12-06 09:56:48.928457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.734 [2024-12-06 09:56:48.928489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.734 [2024-12-06 09:56:48.928500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:23.734 [2024-12-06 09:56:48.932056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.734 [2024-12-06 09:56:48.932088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.734 [2024-12-06 09:56:48.932099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:23.734 [2024-12-06 09:56:48.935624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.734 [2024-12-06 09:56:48.935655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.734 [2024-12-06 09:56:48.935666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:23.734 [2024-12-06 09:56:48.939157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.734 [2024-12-06 09:56:48.939188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.734 [2024-12-06 09:56:48.939208] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:23.734 [2024-12-06 09:56:48.942743] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.734 [2024-12-06 09:56:48.942775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.734 [2024-12-06 09:56:48.942785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:23.734 [2024-12-06 09:56:48.946289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.734 [2024-12-06 09:56:48.946322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.734 [2024-12-06 09:56:48.946332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:23.734 [2024-12-06 09:56:48.949902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.734 [2024-12-06 09:56:48.949934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.734 [2024-12-06 09:56:48.949945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:23.734 [2024-12-06 09:56:48.953463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.734 [2024-12-06 09:56:48.953496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.734 [2024-12-06 09:56:48.953507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:23.734 [2024-12-06 09:56:48.957121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.734 [2024-12-06 09:56:48.957154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.734 [2024-12-06 09:56:48.957165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:23.735 [2024-12-06 09:56:48.960727] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.735 [2024-12-06 09:56:48.960760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.735 [2024-12-06 09:56:48.960771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:23.735 [2024-12-06 09:56:48.964258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.735 [2024-12-06 09:56:48.964290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:23.735 [2024-12-06 09:56:48.964301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:23.735 [2024-12-06 09:56:48.967797] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.735 [2024-12-06 09:56:48.967829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.735 [2024-12-06 09:56:48.967840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:23.735 [2024-12-06 09:56:48.971353] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.735 [2024-12-06 09:56:48.971385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.735 [2024-12-06 09:56:48.971397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:23.735 [2024-12-06 09:56:48.974860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.735 [2024-12-06 09:56:48.974891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.735 [2024-12-06 09:56:48.974901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:23.735 [2024-12-06 09:56:48.978462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.735 [2024-12-06 09:56:48.978493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.735 [2024-12-06 09:56:48.978504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:23.735 [2024-12-06 09:56:48.982078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.735 [2024-12-06 09:56:48.982111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.735 [2024-12-06 09:56:48.982122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:23.735 [2024-12-06 09:56:48.985635] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.735 [2024-12-06 09:56:48.985666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.735 [2024-12-06 09:56:48.985677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:23.735 [2024-12-06 09:56:48.989246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.735 [2024-12-06 09:56:48.989279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23040 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.735 [2024-12-06 09:56:48.989289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:23.735 [2024-12-06 09:56:48.992817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.735 [2024-12-06 09:56:48.992849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.735 [2024-12-06 09:56:48.992860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:23.735 [2024-12-06 09:56:48.996498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.735 [2024-12-06 09:56:48.996531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.735 [2024-12-06 09:56:48.996544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:23.735 [2024-12-06 09:56:49.000338] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.735 [2024-12-06 09:56:49.000371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.735 [2024-12-06 09:56:49.000383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:23.995 [2024-12-06 09:56:49.004166] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.995 [2024-12-06 09:56:49.004200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.995 [2024-12-06 09:56:49.004212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:23.995 [2024-12-06 09:56:49.008034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.995 [2024-12-06 09:56:49.008068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.995 [2024-12-06 09:56:49.008079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:23.995 [2024-12-06 09:56:49.011733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.995 [2024-12-06 09:56:49.011766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.995 [2024-12-06 09:56:49.011778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:23.995 [2024-12-06 09:56:49.015441] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.995 [2024-12-06 09:56:49.015475] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.995 [2024-12-06 09:56:49.015487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:23.995 [2024-12-06 09:56:49.019108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.995 [2024-12-06 09:56:49.019140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.995 [2024-12-06 09:56:49.019151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:23.995 [2024-12-06 09:56:49.022829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.995 [2024-12-06 09:56:49.022862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.995 [2024-12-06 09:56:49.022873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:23.995 [2024-12-06 09:56:49.026518] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.995 [2024-12-06 09:56:49.026551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.995 [2024-12-06 09:56:49.026562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:23.995 [2024-12-06 09:56:49.030161] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.995 [2024-12-06 09:56:49.030193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.995 [2024-12-06 09:56:49.030203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:23.995 [2024-12-06 09:56:49.033720] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.995 [2024-12-06 09:56:49.033751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.995 [2024-12-06 09:56:49.033761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:23.995 [2024-12-06 09:56:49.037239] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.995 [2024-12-06 09:56:49.037271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.995 [2024-12-06 09:56:49.037282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:23.995 [2024-12-06 09:56:49.040812] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 
00:19:23.995 [2024-12-06 09:56:49.040845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.995 [2024-12-06 09:56:49.040856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:23.995 [2024-12-06 09:56:49.044361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.995 [2024-12-06 09:56:49.044394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.995 [2024-12-06 09:56:49.044405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:23.995 [2024-12-06 09:56:49.047863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.995 [2024-12-06 09:56:49.047895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.995 [2024-12-06 09:56:49.047906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:23.995 [2024-12-06 09:56:49.051405] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.995 [2024-12-06 09:56:49.051438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.995 [2024-12-06 09:56:49.051448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:23.995 [2024-12-06 09:56:49.055068] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.995 [2024-12-06 09:56:49.055100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.995 [2024-12-06 09:56:49.055111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:23.995 [2024-12-06 09:56:49.058641] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.995 [2024-12-06 09:56:49.058672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.995 [2024-12-06 09:56:49.058682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:23.995 [2024-12-06 09:56:49.062233] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.995 [2024-12-06 09:56:49.062265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.995 [2024-12-06 09:56:49.062276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:23.995 [2024-12-06 09:56:49.065803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.995 [2024-12-06 09:56:49.065835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.995 [2024-12-06 09:56:49.065845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:23.995 [2024-12-06 09:56:49.069318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.995 [2024-12-06 09:56:49.069351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.995 [2024-12-06 09:56:49.069362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:23.995 [2024-12-06 09:56:49.072833] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.995 [2024-12-06 09:56:49.072865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.995 [2024-12-06 09:56:49.072877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:23.995 [2024-12-06 09:56:49.076351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.995 [2024-12-06 09:56:49.076383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.995 [2024-12-06 09:56:49.076393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:23.995 [2024-12-06 09:56:49.079929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.995 [2024-12-06 09:56:49.079963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.995 [2024-12-06 09:56:49.079974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:23.995 [2024-12-06 09:56:49.083624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.995 [2024-12-06 09:56:49.083657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.995 [2024-12-06 09:56:49.083668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:23.995 [2024-12-06 09:56:49.087316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.995 [2024-12-06 09:56:49.087350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.995 [2024-12-06 09:56:49.087361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:23.995 [2024-12-06 09:56:49.090996] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.995 [2024-12-06 09:56:49.091029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.995 [2024-12-06 09:56:49.091041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:23.995 [2024-12-06 09:56:49.094700] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.995 [2024-12-06 09:56:49.094733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.995 [2024-12-06 09:56:49.094744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:23.995 [2024-12-06 09:56:49.098405] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.996 [2024-12-06 09:56:49.098438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.996 [2024-12-06 09:56:49.098449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:23.996 [2024-12-06 09:56:49.102117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.996 [2024-12-06 09:56:49.102151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.996 [2024-12-06 09:56:49.102162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:23.996 [2024-12-06 09:56:49.105863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.996 [2024-12-06 09:56:49.105897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.996 [2024-12-06 09:56:49.105909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:23.996 [2024-12-06 09:56:49.109546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.996 [2024-12-06 09:56:49.109592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.996 [2024-12-06 09:56:49.109603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:23.996 [2024-12-06 09:56:49.113186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.996 [2024-12-06 09:56:49.113219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.996 [2024-12-06 09:56:49.113230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 
00:19:23.996 [2024-12-06 09:56:49.116877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.996 [2024-12-06 09:56:49.116910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.996 [2024-12-06 09:56:49.116921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:23.996 [2024-12-06 09:56:49.120538] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.996 [2024-12-06 09:56:49.120582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.996 [2024-12-06 09:56:49.120594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:23.996 [2024-12-06 09:56:49.124259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.996 [2024-12-06 09:56:49.124292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.996 [2024-12-06 09:56:49.124304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:23.996 [2024-12-06 09:56:49.128046] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.996 [2024-12-06 09:56:49.128083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.996 [2024-12-06 09:56:49.128094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:23.996 [2024-12-06 09:56:49.131732] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.996 [2024-12-06 09:56:49.131766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.996 [2024-12-06 09:56:49.131777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:23.996 [2024-12-06 09:56:49.135457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.996 [2024-12-06 09:56:49.135490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.996 [2024-12-06 09:56:49.135502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:23.996 [2024-12-06 09:56:49.139173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.996 [2024-12-06 09:56:49.139214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.996 [2024-12-06 09:56:49.139226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:23.996 [2024-12-06 09:56:49.142917] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.996 [2024-12-06 09:56:49.142950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.996 [2024-12-06 09:56:49.142961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:23.996 [2024-12-06 09:56:49.146593] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.996 [2024-12-06 09:56:49.146625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.996 [2024-12-06 09:56:49.146636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:23.996 [2024-12-06 09:56:49.150263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.996 [2024-12-06 09:56:49.150295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.996 [2024-12-06 09:56:49.150306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:23.996 [2024-12-06 09:56:49.153980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.996 [2024-12-06 09:56:49.154012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.996 [2024-12-06 09:56:49.154023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:23.996 [2024-12-06 09:56:49.157635] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.996 [2024-12-06 09:56:49.157668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.996 [2024-12-06 09:56:49.157680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:23.996 [2024-12-06 09:56:49.161347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.996 [2024-12-06 09:56:49.161380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.996 [2024-12-06 09:56:49.161391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:23.996 [2024-12-06 09:56:49.165081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.996 [2024-12-06 09:56:49.165114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.996 [2024-12-06 09:56:49.165126] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:23.996 [2024-12-06 09:56:49.168834] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.996 [2024-12-06 09:56:49.168867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.996 [2024-12-06 09:56:49.168878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:23.996 [2024-12-06 09:56:49.172509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.996 [2024-12-06 09:56:49.172543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.996 [2024-12-06 09:56:49.172555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:23.996 [2024-12-06 09:56:49.176255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.996 [2024-12-06 09:56:49.176289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.996 [2024-12-06 09:56:49.176301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:23.996 [2024-12-06 09:56:49.180000] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.996 [2024-12-06 09:56:49.180033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.996 [2024-12-06 09:56:49.180044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:23.996 [2024-12-06 09:56:49.183711] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.996 [2024-12-06 09:56:49.183744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.996 [2024-12-06 09:56:49.183755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:23.996 [2024-12-06 09:56:49.187413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.996 [2024-12-06 09:56:49.187446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.996 [2024-12-06 09:56:49.187458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:23.996 [2024-12-06 09:56:49.191086] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.996 [2024-12-06 09:56:49.191119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.996 [2024-12-06 
09:56:49.191130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:23.996 [2024-12-06 09:56:49.194772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.996 [2024-12-06 09:56:49.194804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.996 [2024-12-06 09:56:49.194815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:23.996 [2024-12-06 09:56:49.198499] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.997 [2024-12-06 09:56:49.198532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.997 [2024-12-06 09:56:49.198544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:23.997 [2024-12-06 09:56:49.202200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.997 [2024-12-06 09:56:49.202234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.997 [2024-12-06 09:56:49.202245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:23.997 [2024-12-06 09:56:49.205951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.997 [2024-12-06 09:56:49.205984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.997 [2024-12-06 09:56:49.205995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:23.997 [2024-12-06 09:56:49.209749] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.997 [2024-12-06 09:56:49.209782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.997 [2024-12-06 09:56:49.209793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:23.997 [2024-12-06 09:56:49.213504] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.997 [2024-12-06 09:56:49.213538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.997 [2024-12-06 09:56:49.213549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:23.997 [2024-12-06 09:56:49.217142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.997 [2024-12-06 09:56:49.217175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:19:23.997 [2024-12-06 09:56:49.217186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:23.997 [2024-12-06 09:56:49.220894] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.997 [2024-12-06 09:56:49.220927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.997 [2024-12-06 09:56:49.220938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:23.997 [2024-12-06 09:56:49.224801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.997 [2024-12-06 09:56:49.224833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.997 [2024-12-06 09:56:49.224844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:23.997 [2024-12-06 09:56:49.228625] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.997 [2024-12-06 09:56:49.228670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.997 [2024-12-06 09:56:49.228681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:23.997 [2024-12-06 09:56:49.232399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.997 [2024-12-06 09:56:49.232435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.997 [2024-12-06 09:56:49.232446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:23.997 [2024-12-06 09:56:49.236153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.997 [2024-12-06 09:56:49.236187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.997 [2024-12-06 09:56:49.236199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:23.997 [2024-12-06 09:56:49.239863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.997 [2024-12-06 09:56:49.239896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.997 [2024-12-06 09:56:49.239907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:23.997 [2024-12-06 09:56:49.243662] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.997 [2024-12-06 09:56:49.243706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 
nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.997 [2024-12-06 09:56:49.243717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:23.997 [2024-12-06 09:56:49.247471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.997 [2024-12-06 09:56:49.247505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.997 [2024-12-06 09:56:49.247516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:23.997 [2024-12-06 09:56:49.251189] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.997 [2024-12-06 09:56:49.251279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.997 [2024-12-06 09:56:49.251291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:23.997 [2024-12-06 09:56:49.254979] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.997 [2024-12-06 09:56:49.255012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.997 [2024-12-06 09:56:49.255024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:23.997 [2024-12-06 09:56:49.258931] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.997 [2024-12-06 09:56:49.258964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.997 [2024-12-06 09:56:49.258975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:23.997 [2024-12-06 09:56:49.262651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:23.997 [2024-12-06 09:56:49.262684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.997 [2024-12-06 09:56:49.262695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:24.257 [2024-12-06 09:56:49.266364] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.257 [2024-12-06 09:56:49.266397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.257 [2024-12-06 09:56:49.266407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:24.257 [2024-12-06 09:56:49.270149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.257 [2024-12-06 09:56:49.270183] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.257 [2024-12-06 09:56:49.270194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:24.257 [2024-12-06 09:56:49.273977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.257 [2024-12-06 09:56:49.274010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.257 [2024-12-06 09:56:49.274021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:24.257 [2024-12-06 09:56:49.277916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.257 [2024-12-06 09:56:49.277964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.257 [2024-12-06 09:56:49.277976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:24.257 [2024-12-06 09:56:49.281727] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.257 [2024-12-06 09:56:49.281759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.257 [2024-12-06 09:56:49.281771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:24.257 [2024-12-06 09:56:49.285436] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.257 [2024-12-06 09:56:49.285469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.257 [2024-12-06 09:56:49.285479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:24.257 [2024-12-06 09:56:49.289219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.257 [2024-12-06 09:56:49.289252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.257 [2024-12-06 09:56:49.289263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:24.257 [2024-12-06 09:56:49.293037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.257 [2024-12-06 09:56:49.293071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.257 [2024-12-06 09:56:49.293082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:24.257 [2024-12-06 09:56:49.296800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.257 
[2024-12-06 09:56:49.296833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.257 [2024-12-06 09:56:49.296844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:24.257 [2024-12-06 09:56:49.300580] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.257 [2024-12-06 09:56:49.300612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.257 [2024-12-06 09:56:49.300623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:24.257 [2024-12-06 09:56:49.304296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.257 [2024-12-06 09:56:49.304330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.257 [2024-12-06 09:56:49.304341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:24.257 [2024-12-06 09:56:49.308090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.257 [2024-12-06 09:56:49.308124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.257 [2024-12-06 09:56:49.308135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:24.257 [2024-12-06 09:56:49.311770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.257 [2024-12-06 09:56:49.311803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.257 [2024-12-06 09:56:49.311813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:24.257 [2024-12-06 09:56:49.315521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.257 [2024-12-06 09:56:49.315559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.257 [2024-12-06 09:56:49.315596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:24.257 [2024-12-06 09:56:49.319309] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.257 [2024-12-06 09:56:49.319351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.257 [2024-12-06 09:56:49.319362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:24.257 [2024-12-06 09:56:49.323012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.257 [2024-12-06 09:56:49.323044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.257 [2024-12-06 09:56:49.323055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:24.257 [2024-12-06 09:56:49.326759] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.257 [2024-12-06 09:56:49.326791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.257 [2024-12-06 09:56:49.326802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:24.257 [2024-12-06 09:56:49.330760] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.257 [2024-12-06 09:56:49.330793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.257 [2024-12-06 09:56:49.330804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:24.257 [2024-12-06 09:56:49.334521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.257 [2024-12-06 09:56:49.334554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.257 [2024-12-06 09:56:49.334578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:24.257 [2024-12-06 09:56:49.338330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.257 [2024-12-06 09:56:49.338364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.257 [2024-12-06 09:56:49.338375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:24.257 [2024-12-06 09:56:49.342194] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.257 [2024-12-06 09:56:49.342227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.257 [2024-12-06 09:56:49.342238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:24.257 [2024-12-06 09:56:49.345918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.257 [2024-12-06 09:56:49.345950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.257 [2024-12-06 09:56:49.345961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:24.257 [2024-12-06 09:56:49.349682] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.257 [2024-12-06 09:56:49.349731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.257 [2024-12-06 09:56:49.349742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:24.257 [2024-12-06 09:56:49.353393] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.257 [2024-12-06 09:56:49.353427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.258 [2024-12-06 09:56:49.353438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:24.258 [2024-12-06 09:56:49.357136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.258 [2024-12-06 09:56:49.357170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.258 [2024-12-06 09:56:49.357181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:24.258 [2024-12-06 09:56:49.360890] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.258 [2024-12-06 09:56:49.360923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.258 [2024-12-06 09:56:49.360934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:24.258 [2024-12-06 09:56:49.364591] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.258 [2024-12-06 09:56:49.364623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.258 [2024-12-06 09:56:49.364634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:24.258 [2024-12-06 09:56:49.368266] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.258 [2024-12-06 09:56:49.368301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.258 [2024-12-06 09:56:49.368312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:24.258 [2024-12-06 09:56:49.372046] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.258 [2024-12-06 09:56:49.372080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.258 [2024-12-06 09:56:49.372091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:19:24.258 [2024-12-06 09:56:49.375697] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.258 [2024-12-06 09:56:49.375729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.258 [2024-12-06 09:56:49.375740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:24.258 [2024-12-06 09:56:49.379328] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.258 [2024-12-06 09:56:49.379364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.258 [2024-12-06 09:56:49.379376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:24.258 [2024-12-06 09:56:49.383160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.258 [2024-12-06 09:56:49.383191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.258 [2024-12-06 09:56:49.383227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:24.258 [2024-12-06 09:56:49.386987] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.258 [2024-12-06 09:56:49.387019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.258 [2024-12-06 09:56:49.387030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:24.258 [2024-12-06 09:56:49.390520] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.258 [2024-12-06 09:56:49.390552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.258 [2024-12-06 09:56:49.390562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:24.258 [2024-12-06 09:56:49.394115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.258 [2024-12-06 09:56:49.394147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.258 [2024-12-06 09:56:49.394157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:24.258 [2024-12-06 09:56:49.397726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.258 [2024-12-06 09:56:49.397758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.258 [2024-12-06 09:56:49.397769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:24.258 [2024-12-06 09:56:49.401283] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.258 [2024-12-06 09:56:49.401315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.258 [2024-12-06 09:56:49.401325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:24.258 [2024-12-06 09:56:49.404858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.258 [2024-12-06 09:56:49.404890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.258 [2024-12-06 09:56:49.404901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:24.258 [2024-12-06 09:56:49.408552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.258 [2024-12-06 09:56:49.408592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.258 [2024-12-06 09:56:49.408603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:24.258 [2024-12-06 09:56:49.412156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.258 [2024-12-06 09:56:49.412188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.258 [2024-12-06 09:56:49.412199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:24.258 [2024-12-06 09:56:49.415735] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.258 [2024-12-06 09:56:49.415766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.258 [2024-12-06 09:56:49.415776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:24.258 [2024-12-06 09:56:49.419280] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.258 [2024-12-06 09:56:49.419313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.258 [2024-12-06 09:56:49.419323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:24.258 [2024-12-06 09:56:49.422844] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.258 [2024-12-06 09:56:49.422875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.258 [2024-12-06 09:56:49.422885] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:24.258 [2024-12-06 09:56:49.426497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.258 [2024-12-06 09:56:49.426529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.258 [2024-12-06 09:56:49.426539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:24.258 [2024-12-06 09:56:49.430105] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.258 [2024-12-06 09:56:49.430137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.258 [2024-12-06 09:56:49.430148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:24.258 [2024-12-06 09:56:49.433646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.258 [2024-12-06 09:56:49.433679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.258 [2024-12-06 09:56:49.433690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:24.258 [2024-12-06 09:56:49.437148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.258 [2024-12-06 09:56:49.437180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.258 [2024-12-06 09:56:49.437191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:24.258 [2024-12-06 09:56:49.440766] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.258 [2024-12-06 09:56:49.440797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.258 [2024-12-06 09:56:49.440807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:24.258 [2024-12-06 09:56:49.444378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.258 [2024-12-06 09:56:49.444411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.258 [2024-12-06 09:56:49.444422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:24.258 [2024-12-06 09:56:49.447901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.258 [2024-12-06 09:56:49.447932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.258 [2024-12-06 
09:56:49.447943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:24.258 [2024-12-06 09:56:49.451409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.258 [2024-12-06 09:56:49.451442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.259 [2024-12-06 09:56:49.451453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:24.259 [2024-12-06 09:56:49.454977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.259 [2024-12-06 09:56:49.455008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.259 [2024-12-06 09:56:49.455019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:24.259 [2024-12-06 09:56:49.458522] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.259 [2024-12-06 09:56:49.458554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.259 [2024-12-06 09:56:49.458564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:24.259 [2024-12-06 09:56:49.462038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.259 [2024-12-06 09:56:49.462070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.259 [2024-12-06 09:56:49.462081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:24.259 [2024-12-06 09:56:49.465511] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.259 [2024-12-06 09:56:49.465544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.259 [2024-12-06 09:56:49.465554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:24.259 [2024-12-06 09:56:49.469080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.259 [2024-12-06 09:56:49.469113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.259 [2024-12-06 09:56:49.469123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:24.259 [2024-12-06 09:56:49.472577] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.259 [2024-12-06 09:56:49.472607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:19:24.259 [2024-12-06 09:56:49.472617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:24.259 [2024-12-06 09:56:49.476087] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.259 [2024-12-06 09:56:49.476119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.259 [2024-12-06 09:56:49.476130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:24.259 [2024-12-06 09:56:49.479603] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.259 [2024-12-06 09:56:49.479634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.259 [2024-12-06 09:56:49.479645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:24.259 [2024-12-06 09:56:49.483086] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.259 [2024-12-06 09:56:49.483117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.259 [2024-12-06 09:56:49.483128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:24.259 [2024-12-06 09:56:49.486656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.259 [2024-12-06 09:56:49.486704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.259 [2024-12-06 09:56:49.486715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:24.259 [2024-12-06 09:56:49.490163] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.259 [2024-12-06 09:56:49.490196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.259 [2024-12-06 09:56:49.490207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:24.259 [2024-12-06 09:56:49.493646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.259 [2024-12-06 09:56:49.493677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.259 [2024-12-06 09:56:49.493688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:24.259 [2024-12-06 09:56:49.497175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.259 [2024-12-06 09:56:49.497208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:0 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.259 [2024-12-06 09:56:49.497219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:24.259 [2024-12-06 09:56:49.500680] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.259 [2024-12-06 09:56:49.500712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.259 [2024-12-06 09:56:49.500722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:24.259 [2024-12-06 09:56:49.504152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.259 [2024-12-06 09:56:49.504184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.259 [2024-12-06 09:56:49.504194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:24.259 [2024-12-06 09:56:49.507719] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.259 [2024-12-06 09:56:49.507750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.259 [2024-12-06 09:56:49.507761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:24.259 [2024-12-06 09:56:49.511241] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.259 [2024-12-06 09:56:49.511273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.259 [2024-12-06 09:56:49.511283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:24.259 [2024-12-06 09:56:49.514735] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.259 [2024-12-06 09:56:49.514766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.259 [2024-12-06 09:56:49.514776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:24.259 [2024-12-06 09:56:49.518249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.259 [2024-12-06 09:56:49.518281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.259 [2024-12-06 09:56:49.518292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:24.259 [2024-12-06 09:56:49.521783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.259 [2024-12-06 09:56:49.521814] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.259 [2024-12-06 09:56:49.521825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:24.259 [2024-12-06 09:56:49.525278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.259 [2024-12-06 09:56:49.525310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.259 [2024-12-06 09:56:49.525321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:24.519 [2024-12-06 09:56:49.528804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.519 [2024-12-06 09:56:49.528835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.519 [2024-12-06 09:56:49.528846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:24.519 [2024-12-06 09:56:49.532325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.519 [2024-12-06 09:56:49.532365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.519 [2024-12-06 09:56:49.532376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:24.519 [2024-12-06 09:56:49.535949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.519 [2024-12-06 09:56:49.535981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.519 [2024-12-06 09:56:49.535992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:24.519 [2024-12-06 09:56:49.539531] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.519 [2024-12-06 09:56:49.539564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.519 [2024-12-06 09:56:49.539588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:24.519 [2024-12-06 09:56:49.543099] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.519 [2024-12-06 09:56:49.543141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.519 [2024-12-06 09:56:49.543152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:24.519 [2024-12-06 09:56:49.546662] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 
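(Aside, not part of the console output: the completions above are printed by spdk_nvme_print_completion as a "(SCT/SC)" pair in hex. A minimal sketch of how that pair decodes, assuming the generic NVMe status layout; the struct and helper names are illustrative and not SPDK APIs. Status Code Type 0x0 is the generic command status set, and within it Status Code 0x22 is "Command Transient Transport Error", which is why every injected digest failure above surfaces as "(00/22)".)

/* Minimal sketch, not SPDK code: decoding the "(00/22)" status printed above.
 * First value is the Status Code Type (SCT), second the Status Code (SC).
 * SCT 0x0 = generic command status set; SC 0x22 = Command Transient
 * Transport Error in the NVMe base specification.
 */
#include <stdint.h>
#include <stdio.h>

struct status_field {
	uint16_t sc;   /* Status Code */
	uint16_t sct;  /* Status Code Type */
};

static const char *status_str(struct status_field s)
{
	if (s.sct == 0x0 && s.sc == 0x22) {
		return "COMMAND TRANSIENT TRANSPORT ERROR";
	}
	return "OTHER";
}

int main(void)
{
	struct status_field s = { .sc = 0x22, .sct = 0x0 };

	/* Prints: (00/22) COMMAND TRANSIENT TRANSPORT ERROR */
	printf("(%02x/%02x) %s\n", s.sct, s.sc, status_str(s));
	return 0;
}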
00:19:24.519 [2024-12-06 09:56:49.546706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.519 [2024-12-06 09:56:49.546717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:24.519 [2024-12-06 09:56:49.550265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.519 [2024-12-06 09:56:49.550297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.519 [2024-12-06 09:56:49.550308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:24.519 [2024-12-06 09:56:49.553858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.519 [2024-12-06 09:56:49.553901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.519 [2024-12-06 09:56:49.553912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:24.519 [2024-12-06 09:56:49.557446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.519 [2024-12-06 09:56:49.557488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.519 [2024-12-06 09:56:49.557499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:24.519 [2024-12-06 09:56:49.561025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.519 [2024-12-06 09:56:49.561066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.519 [2024-12-06 09:56:49.561088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:24.519 [2024-12-06 09:56:49.564715] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.519 [2024-12-06 09:56:49.564747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.519 [2024-12-06 09:56:49.564758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:24.519 [2024-12-06 09:56:49.568332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.519 [2024-12-06 09:56:49.568364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.519 [2024-12-06 09:56:49.568375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:24.519 [2024-12-06 09:56:49.571942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.519 [2024-12-06 09:56:49.571981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.519 [2024-12-06 09:56:49.571992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:24.519 [2024-12-06 09:56:49.575479] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.519 [2024-12-06 09:56:49.575512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.519 [2024-12-06 09:56:49.575522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:24.519 [2024-12-06 09:56:49.579062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.519 [2024-12-06 09:56:49.579104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.519 [2024-12-06 09:56:49.579115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:24.519 [2024-12-06 09:56:49.582663] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.519 [2024-12-06 09:56:49.582704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.519 [2024-12-06 09:56:49.582714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:24.519 [2024-12-06 09:56:49.586293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.519 [2024-12-06 09:56:49.586342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.519 [2024-12-06 09:56:49.586361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:24.519 [2024-12-06 09:56:49.589946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.519 [2024-12-06 09:56:49.589978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.519 [2024-12-06 09:56:49.589988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:24.519 [2024-12-06 09:56:49.593428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.519 [2024-12-06 09:56:49.593459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.519 [2024-12-06 09:56:49.593470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:24.519 [2024-12-06 09:56:49.596937] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.519 [2024-12-06 09:56:49.596968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.519 [2024-12-06 09:56:49.596978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:24.519 [2024-12-06 09:56:49.600531] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.519 [2024-12-06 09:56:49.600582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.519 [2024-12-06 09:56:49.600595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:24.519 [2024-12-06 09:56:49.604150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.519 [2024-12-06 09:56:49.604182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.519 [2024-12-06 09:56:49.604193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:24.519 [2024-12-06 09:56:49.607717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.519 [2024-12-06 09:56:49.607749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.519 [2024-12-06 09:56:49.607761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:24.519 [2024-12-06 09:56:49.611290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.519 [2024-12-06 09:56:49.611322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.520 [2024-12-06 09:56:49.611333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:24.520 [2024-12-06 09:56:49.614741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.520 [2024-12-06 09:56:49.614772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.520 [2024-12-06 09:56:49.614783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:24.520 [2024-12-06 09:56:49.618272] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.520 [2024-12-06 09:56:49.618305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.520 [2024-12-06 09:56:49.618315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:19:24.520 [2024-12-06 09:56:49.621864] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.520 [2024-12-06 09:56:49.621908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.520 [2024-12-06 09:56:49.621927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:24.520 [2024-12-06 09:56:49.625473] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.520 [2024-12-06 09:56:49.625504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.520 [2024-12-06 09:56:49.625515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:24.520 [2024-12-06 09:56:49.628977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.520 [2024-12-06 09:56:49.629030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.520 [2024-12-06 09:56:49.629041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:24.520 [2024-12-06 09:56:49.632526] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.520 [2024-12-06 09:56:49.632577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.520 [2024-12-06 09:56:49.632593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:24.520 [2024-12-06 09:56:49.636214] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.520 [2024-12-06 09:56:49.636246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.520 [2024-12-06 09:56:49.636257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:24.520 [2024-12-06 09:56:49.639965] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.520 [2024-12-06 09:56:49.640007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.520 [2024-12-06 09:56:49.640018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:24.520 [2024-12-06 09:56:49.643751] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.520 [2024-12-06 09:56:49.643799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.520 [2024-12-06 09:56:49.643810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:24.520 [2024-12-06 09:56:49.647489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.520 [2024-12-06 09:56:49.647522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.520 [2024-12-06 09:56:49.647534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:24.520 [2024-12-06 09:56:49.651306] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.520 [2024-12-06 09:56:49.651339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.520 [2024-12-06 09:56:49.651351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:24.520 [2024-12-06 09:56:49.654996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.520 [2024-12-06 09:56:49.655027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.520 [2024-12-06 09:56:49.655038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:24.520 [2024-12-06 09:56:49.658475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.520 [2024-12-06 09:56:49.658507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.520 [2024-12-06 09:56:49.658517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:24.520 [2024-12-06 09:56:49.662045] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.520 [2024-12-06 09:56:49.662078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.520 [2024-12-06 09:56:49.662089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:24.520 [2024-12-06 09:56:49.665541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.520 [2024-12-06 09:56:49.665590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.520 [2024-12-06 09:56:49.665602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:24.520 [2024-12-06 09:56:49.669088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.520 [2024-12-06 09:56:49.669119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.520 [2024-12-06 09:56:49.669130] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:24.520 [2024-12-06 09:56:49.672685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.520 [2024-12-06 09:56:49.672728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.520 [2024-12-06 09:56:49.672739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:24.520 [2024-12-06 09:56:49.676161] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.520 [2024-12-06 09:56:49.676194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.520 [2024-12-06 09:56:49.676204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:24.520 [2024-12-06 09:56:49.679751] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.520 [2024-12-06 09:56:49.679796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.520 [2024-12-06 09:56:49.679807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:24.520 [2024-12-06 09:56:49.683296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.520 [2024-12-06 09:56:49.683328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.520 [2024-12-06 09:56:49.683338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:24.520 [2024-12-06 09:56:49.686865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.520 [2024-12-06 09:56:49.686913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.520 [2024-12-06 09:56:49.686924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:24.520 [2024-12-06 09:56:49.690424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.520 [2024-12-06 09:56:49.690455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.520 [2024-12-06 09:56:49.690466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:24.520 [2024-12-06 09:56:49.693955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.520 [2024-12-06 09:56:49.693988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.520 
[2024-12-06 09:56:49.693998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:24.520 [2024-12-06 09:56:49.697434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.520 [2024-12-06 09:56:49.697465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.520 [2024-12-06 09:56:49.697476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:24.520 [2024-12-06 09:56:49.700883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.521 [2024-12-06 09:56:49.700915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.521 [2024-12-06 09:56:49.700925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:24.521 [2024-12-06 09:56:49.704404] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.521 [2024-12-06 09:56:49.704447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.521 [2024-12-06 09:56:49.704458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:24.521 [2024-12-06 09:56:49.707977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.521 [2024-12-06 09:56:49.708021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.521 [2024-12-06 09:56:49.708031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:24.521 [2024-12-06 09:56:49.711533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.521 [2024-12-06 09:56:49.711586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.521 [2024-12-06 09:56:49.711598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:24.521 [2024-12-06 09:56:49.715015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.521 [2024-12-06 09:56:49.715045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.521 [2024-12-06 09:56:49.715055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:24.521 [2024-12-06 09:56:49.718525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.521 [2024-12-06 09:56:49.718556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21280 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.521 [2024-12-06 09:56:49.718579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:24.521 [2024-12-06 09:56:49.722060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.521 [2024-12-06 09:56:49.722091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.521 [2024-12-06 09:56:49.722102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:24.521 [2024-12-06 09:56:49.725558] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.521 [2024-12-06 09:56:49.725608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.521 [2024-12-06 09:56:49.725620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:24.521 [2024-12-06 09:56:49.729106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.521 [2024-12-06 09:56:49.729138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.521 [2024-12-06 09:56:49.729148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:24.521 [2024-12-06 09:56:49.732659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.521 [2024-12-06 09:56:49.732691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.521 [2024-12-06 09:56:49.732702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:24.521 [2024-12-06 09:56:49.736169] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.521 [2024-12-06 09:56:49.736201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.521 [2024-12-06 09:56:49.736212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:24.521 [2024-12-06 09:56:49.739710] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.521 [2024-12-06 09:56:49.739741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.521 [2024-12-06 09:56:49.739752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:24.521 [2024-12-06 09:56:49.743230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.521 [2024-12-06 09:56:49.743261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:5 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.521 [2024-12-06 09:56:49.743271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:24.521 [2024-12-06 09:56:49.746733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.521 [2024-12-06 09:56:49.746763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.521 [2024-12-06 09:56:49.746773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:24.521 [2024-12-06 09:56:49.750259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.521 [2024-12-06 09:56:49.750291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.521 [2024-12-06 09:56:49.750301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:24.521 [2024-12-06 09:56:49.753808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.521 [2024-12-06 09:56:49.753842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.521 [2024-12-06 09:56:49.753852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:24.521 [2024-12-06 09:56:49.757344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.521 [2024-12-06 09:56:49.757376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.521 [2024-12-06 09:56:49.757387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:24.521 [2024-12-06 09:56:49.760921] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.521 [2024-12-06 09:56:49.760953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.521 [2024-12-06 09:56:49.760964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:24.521 [2024-12-06 09:56:49.764397] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.521 [2024-12-06 09:56:49.764440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.521 [2024-12-06 09:56:49.764450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:24.521 [2024-12-06 09:56:49.767973] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.521 [2024-12-06 09:56:49.768004] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.521 [2024-12-06 09:56:49.768015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:24.521 [2024-12-06 09:56:49.771536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.521 [2024-12-06 09:56:49.771578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.521 [2024-12-06 09:56:49.771591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:24.521 [2024-12-06 09:56:49.775028] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.521 [2024-12-06 09:56:49.775059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.521 [2024-12-06 09:56:49.775069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:24.521 [2024-12-06 09:56:49.778546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.521 [2024-12-06 09:56:49.778590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.521 [2024-12-06 09:56:49.778602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:24.521 [2024-12-06 09:56:49.782134] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.521 [2024-12-06 09:56:49.782166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.521 [2024-12-06 09:56:49.782177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:24.521 8463.00 IOPS, 1057.88 MiB/s [2024-12-06T09:56:49.793Z] [2024-12-06 09:56:49.786433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.521 [2024-12-06 09:56:49.786474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.521 [2024-12-06 09:56:49.786495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:24.781 [2024-12-06 09:56:49.790057] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.782 [2024-12-06 09:56:49.790089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.782 [2024-12-06 09:56:49.790099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:24.782 [2024-12-06 09:56:49.793536] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.782 [2024-12-06 09:56:49.793578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.782 [2024-12-06 09:56:49.793590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:24.782 [2024-12-06 09:56:49.797008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.782 [2024-12-06 09:56:49.797040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.782 [2024-12-06 09:56:49.797051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:24.782 [2024-12-06 09:56:49.800518] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.782 [2024-12-06 09:56:49.800551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.782 [2024-12-06 09:56:49.800562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:24.782 [2024-12-06 09:56:49.804080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.782 [2024-12-06 09:56:49.804112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.782 [2024-12-06 09:56:49.804122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:24.782 [2024-12-06 09:56:49.807635] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.782 [2024-12-06 09:56:49.807673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.782 [2024-12-06 09:56:49.807683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:24.782 [2024-12-06 09:56:49.811114] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.782 [2024-12-06 09:56:49.811146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.782 [2024-12-06 09:56:49.811157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:24.782 [2024-12-06 09:56:49.814606] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.782 [2024-12-06 09:56:49.814638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.782 [2024-12-06 09:56:49.814649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 
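(Aside, not part of the console output: the repeated "data digest error" messages come from the host-side digest check on received data PDUs, reported by nvme_tcp_accel_seq_recv_compute_crc32_done. As a minimal sketch, assuming the standard NVMe/TCP data digest (a CRC32C over the PDU DATA field) and with made-up helper names rather than SPDK's accel-sequence code, this is the kind of comparison whose mismatch is being logged above and then completed as a transient transport error.)

/* Minimal sketch, not SPDK code: the data digest (DDGST) check failing in
 * the log above. NVMe/TCP's DDGST is a CRC32C (Castagnoli polynomial,
 * reflected form 0x82F63B78) over the PDU data; a mismatch with the digest
 * carried in the PDU is reported as a data digest error.
 */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

static uint32_t crc32c(const uint8_t *buf, size_t len)
{
	uint32_t crc = 0xFFFFFFFFu;

	for (size_t i = 0; i < len; i++) {
		crc ^= buf[i];
		for (int bit = 0; bit < 8; bit++) {
			if (crc & 1u) {
				crc = (crc >> 1) ^ 0x82F63B78u;
			} else {
				crc >>= 1;
			}
		}
	}
	return crc ^ 0xFFFFFFFFu;
}

/* True when the locally computed digest matches the one carried in the PDU. */
static bool ddgst_matches(const uint8_t *data, size_t len, uint32_t received_ddgst)
{
	return crc32c(data, len) == received_ddgst;
}

int main(void)
{
	const uint8_t data[] = "spdk";

	/* A corrupted DDGST (0 here) does not match, mimicking the injected error. */
	return ddgst_matches(data, sizeof(data) - 1, 0) ? 0 : 1;
}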
00:19:24.782 [2024-12-06 09:56:49.818100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.782 [2024-12-06 09:56:49.818132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.782 [2024-12-06 09:56:49.818142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:24.782 [2024-12-06 09:56:49.821578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.782 [2024-12-06 09:56:49.821619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.782 [2024-12-06 09:56:49.821630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:24.782 [2024-12-06 09:56:49.825064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.782 [2024-12-06 09:56:49.825095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.782 [2024-12-06 09:56:49.825106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:24.782 [2024-12-06 09:56:49.828527] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.782 [2024-12-06 09:56:49.828578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.782 [2024-12-06 09:56:49.828596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:24.782 [2024-12-06 09:56:49.832188] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.782 [2024-12-06 09:56:49.832220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.782 [2024-12-06 09:56:49.832231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:24.782 [2024-12-06 09:56:49.835866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.782 [2024-12-06 09:56:49.835898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.782 [2024-12-06 09:56:49.835909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:24.782 [2024-12-06 09:56:49.839563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.782 [2024-12-06 09:56:49.839618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.782 [2024-12-06 09:56:49.839640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:24.782 [2024-12-06 09:56:49.843233] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.782 [2024-12-06 09:56:49.843265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.782 [2024-12-06 09:56:49.843277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:24.782 [2024-12-06 09:56:49.846920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.782 [2024-12-06 09:56:49.846953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.782 [2024-12-06 09:56:49.846963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:24.782 [2024-12-06 09:56:49.850550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.782 [2024-12-06 09:56:49.850605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.782 [2024-12-06 09:56:49.850617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:24.782 [2024-12-06 09:56:49.854216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.782 [2024-12-06 09:56:49.854262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.782 [2024-12-06 09:56:49.854280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:24.782 [2024-12-06 09:56:49.857959] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.782 [2024-12-06 09:56:49.857992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.782 [2024-12-06 09:56:49.858011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:24.782 [2024-12-06 09:56:49.861609] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.782 [2024-12-06 09:56:49.861651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.782 [2024-12-06 09:56:49.861662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:24.782 [2024-12-06 09:56:49.865316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.782 [2024-12-06 09:56:49.865350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.782 [2024-12-06 09:56:49.865361] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:24.782 [2024-12-06 09:56:49.869016] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.782 [2024-12-06 09:56:49.869048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.782 [2024-12-06 09:56:49.869059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:24.782 [2024-12-06 09:56:49.872653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.782 [2024-12-06 09:56:49.872682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.782 [2024-12-06 09:56:49.872693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:24.782 [2024-12-06 09:56:49.876424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.782 [2024-12-06 09:56:49.876457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.782 [2024-12-06 09:56:49.876469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:24.782 [2024-12-06 09:56:49.880160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.782 [2024-12-06 09:56:49.880193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.782 [2024-12-06 09:56:49.880204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:24.782 [2024-12-06 09:56:49.883937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.782 [2024-12-06 09:56:49.883970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.783 [2024-12-06 09:56:49.883982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:24.783 [2024-12-06 09:56:49.887554] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.783 [2024-12-06 09:56:49.887596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.783 [2024-12-06 09:56:49.887608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:24.783 [2024-12-06 09:56:49.891203] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.783 [2024-12-06 09:56:49.891246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.783 [2024-12-06 
09:56:49.891258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:24.783 [2024-12-06 09:56:49.894890] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.783 [2024-12-06 09:56:49.894922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.783 [2024-12-06 09:56:49.894933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:24.783 [2024-12-06 09:56:49.898527] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.783 [2024-12-06 09:56:49.898560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.783 [2024-12-06 09:56:49.898583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:24.783 [2024-12-06 09:56:49.902149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.783 [2024-12-06 09:56:49.902181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.783 [2024-12-06 09:56:49.902192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:24.783 [2024-12-06 09:56:49.905792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.783 [2024-12-06 09:56:49.905824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.783 [2024-12-06 09:56:49.905835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:24.783 [2024-12-06 09:56:49.909486] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.783 [2024-12-06 09:56:49.909518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.783 [2024-12-06 09:56:49.909530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:24.783 [2024-12-06 09:56:49.913181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.783 [2024-12-06 09:56:49.913213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.783 [2024-12-06 09:56:49.913224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:24.783 [2024-12-06 09:56:49.916834] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.783 [2024-12-06 09:56:49.916867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:19:24.783 [2024-12-06 09:56:49.916878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:24.783 [2024-12-06 09:56:49.920490] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.783 [2024-12-06 09:56:49.920523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.783 [2024-12-06 09:56:49.920534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:24.783 [2024-12-06 09:56:49.924127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.783 [2024-12-06 09:56:49.924159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.783 [2024-12-06 09:56:49.924171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:24.783 [2024-12-06 09:56:49.927759] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.783 [2024-12-06 09:56:49.927791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.783 [2024-12-06 09:56:49.927802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:24.783 [2024-12-06 09:56:49.931461] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.783 [2024-12-06 09:56:49.931494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.783 [2024-12-06 09:56:49.931506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:24.783 [2024-12-06 09:56:49.935225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.783 [2024-12-06 09:56:49.935272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.783 [2024-12-06 09:56:49.935283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:24.783 [2024-12-06 09:56:49.938932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.783 [2024-12-06 09:56:49.938964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.783 [2024-12-06 09:56:49.938975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:24.783 [2024-12-06 09:56:49.942536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.783 [2024-12-06 09:56:49.942578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:12 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.783 [2024-12-06 09:56:49.942591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:24.783 [2024-12-06 09:56:49.946233] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.783 [2024-12-06 09:56:49.946267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.783 [2024-12-06 09:56:49.946277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:24.783 [2024-12-06 09:56:49.949857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.783 [2024-12-06 09:56:49.949890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.783 [2024-12-06 09:56:49.949901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:24.783 [2024-12-06 09:56:49.953488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.783 [2024-12-06 09:56:49.953519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.783 [2024-12-06 09:56:49.953531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:24.783 [2024-12-06 09:56:49.957157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.783 [2024-12-06 09:56:49.957190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.783 [2024-12-06 09:56:49.957201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:24.783 [2024-12-06 09:56:49.960800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.783 [2024-12-06 09:56:49.960832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.783 [2024-12-06 09:56:49.960843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:24.783 [2024-12-06 09:56:49.964448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.783 [2024-12-06 09:56:49.964480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.783 [2024-12-06 09:56:49.964492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:24.783 [2024-12-06 09:56:49.968086] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.783 [2024-12-06 09:56:49.968120] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.783 [2024-12-06 09:56:49.968131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:24.783 [2024-12-06 09:56:49.971796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.783 [2024-12-06 09:56:49.971829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.783 [2024-12-06 09:56:49.971840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:24.783 [2024-12-06 09:56:49.975457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.783 [2024-12-06 09:56:49.975490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.783 [2024-12-06 09:56:49.975501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:24.783 [2024-12-06 09:56:49.979132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.783 [2024-12-06 09:56:49.979164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.783 [2024-12-06 09:56:49.979175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:24.783 [2024-12-06 09:56:49.982817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.783 [2024-12-06 09:56:49.982849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.784 [2024-12-06 09:56:49.982860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:24.784 [2024-12-06 09:56:49.986463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.784 [2024-12-06 09:56:49.986496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.784 [2024-12-06 09:56:49.986507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:24.784 [2024-12-06 09:56:49.990163] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.784 [2024-12-06 09:56:49.990195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.784 [2024-12-06 09:56:49.990206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:24.784 [2024-12-06 09:56:49.993845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.784 
[2024-12-06 09:56:49.993877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.784 [2024-12-06 09:56:49.993888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:24.784 [2024-12-06 09:56:49.997492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.784 [2024-12-06 09:56:49.997524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.784 [2024-12-06 09:56:49.997535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:24.784 [2024-12-06 09:56:50.001190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.784 [2024-12-06 09:56:50.001223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.784 [2024-12-06 09:56:50.001234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:24.784 [2024-12-06 09:56:50.004829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.784 [2024-12-06 09:56:50.004862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.784 [2024-12-06 09:56:50.004874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:24.784 [2024-12-06 09:56:50.008510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.784 [2024-12-06 09:56:50.008543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.784 [2024-12-06 09:56:50.008554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:24.784 [2024-12-06 09:56:50.012220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.784 [2024-12-06 09:56:50.012252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.784 [2024-12-06 09:56:50.012262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:24.784 [2024-12-06 09:56:50.015723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.784 [2024-12-06 09:56:50.015754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.784 [2024-12-06 09:56:50.015764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:24.784 [2024-12-06 09:56:50.019167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x134d620) 00:19:24.784 [2024-12-06 09:56:50.019221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.784 [2024-12-06 09:56:50.019234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:24.784 [2024-12-06 09:56:50.022826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.784 [2024-12-06 09:56:50.022856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.784 [2024-12-06 09:56:50.022866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:24.784 [2024-12-06 09:56:50.026344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.784 [2024-12-06 09:56:50.026375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.784 [2024-12-06 09:56:50.026386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:24.784 [2024-12-06 09:56:50.029863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.784 [2024-12-06 09:56:50.029895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.784 [2024-12-06 09:56:50.029906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:24.784 [2024-12-06 09:56:50.033373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.784 [2024-12-06 09:56:50.033405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.784 [2024-12-06 09:56:50.033416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:24.784 [2024-12-06 09:56:50.036928] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.784 [2024-12-06 09:56:50.036960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.784 [2024-12-06 09:56:50.036971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:24.784 [2024-12-06 09:56:50.040425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.784 [2024-12-06 09:56:50.040457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.784 [2024-12-06 09:56:50.040468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:24.784 [2024-12-06 09:56:50.043942] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.784 [2024-12-06 09:56:50.043973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.784 [2024-12-06 09:56:50.043984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:24.784 [2024-12-06 09:56:50.047394] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.784 [2024-12-06 09:56:50.047444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.784 [2024-12-06 09:56:50.047455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:24.784 [2024-12-06 09:56:50.050920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:24.784 [2024-12-06 09:56:50.050951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.784 [2024-12-06 09:56:50.050961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:25.044 [2024-12-06 09:56:50.054444] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.044 [2024-12-06 09:56:50.054486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.044 [2024-12-06 09:56:50.054497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:25.044 [2024-12-06 09:56:50.057949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.044 [2024-12-06 09:56:50.057982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.044 [2024-12-06 09:56:50.057993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:25.044 [2024-12-06 09:56:50.061439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.044 [2024-12-06 09:56:50.061471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.044 [2024-12-06 09:56:50.061482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:25.044 [2024-12-06 09:56:50.064936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.044 [2024-12-06 09:56:50.064968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.044 [2024-12-06 09:56:50.064978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 
m:0 dnr:0 00:19:25.044 [2024-12-06 09:56:50.068449] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.044 [2024-12-06 09:56:50.068481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.044 [2024-12-06 09:56:50.068492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:25.044 [2024-12-06 09:56:50.071961] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.044 [2024-12-06 09:56:50.071992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.044 [2024-12-06 09:56:50.072004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:25.044 [2024-12-06 09:56:50.075451] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.044 [2024-12-06 09:56:50.075483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.045 [2024-12-06 09:56:50.075494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:25.045 [2024-12-06 09:56:50.079011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.045 [2024-12-06 09:56:50.079041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.045 [2024-12-06 09:56:50.079052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:25.045 [2024-12-06 09:56:50.082572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.045 [2024-12-06 09:56:50.082614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.045 [2024-12-06 09:56:50.082625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:25.045 [2024-12-06 09:56:50.086064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.045 [2024-12-06 09:56:50.086096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.045 [2024-12-06 09:56:50.086107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:25.045 [2024-12-06 09:56:50.089598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.045 [2024-12-06 09:56:50.089636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.045 [2024-12-06 09:56:50.089647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:25.045 [2024-12-06 09:56:50.093130] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.045 [2024-12-06 09:56:50.093161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.045 [2024-12-06 09:56:50.093172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:25.045 [2024-12-06 09:56:50.096647] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.045 [2024-12-06 09:56:50.096678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.045 [2024-12-06 09:56:50.096689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:25.045 [2024-12-06 09:56:50.100162] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.045 [2024-12-06 09:56:50.100194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.045 [2024-12-06 09:56:50.100205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:25.045 [2024-12-06 09:56:50.103714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.045 [2024-12-06 09:56:50.103745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.045 [2024-12-06 09:56:50.103756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:25.045 [2024-12-06 09:56:50.107210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.045 [2024-12-06 09:56:50.107250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.045 [2024-12-06 09:56:50.107261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:25.045 [2024-12-06 09:56:50.110832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.045 [2024-12-06 09:56:50.110862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.045 [2024-12-06 09:56:50.110874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:25.045 [2024-12-06 09:56:50.114359] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.045 [2024-12-06 09:56:50.114390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.045 [2024-12-06 09:56:50.114401] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:25.045 [2024-12-06 09:56:50.117863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.045 [2024-12-06 09:56:50.117896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.045 [2024-12-06 09:56:50.117906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:25.045 [2024-12-06 09:56:50.121369] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.045 [2024-12-06 09:56:50.121400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.045 [2024-12-06 09:56:50.121411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:25.045 [2024-12-06 09:56:50.124933] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.045 [2024-12-06 09:56:50.124964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.045 [2024-12-06 09:56:50.124975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:25.045 [2024-12-06 09:56:50.128457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.045 [2024-12-06 09:56:50.128489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.045 [2024-12-06 09:56:50.128501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:25.045 [2024-12-06 09:56:50.131997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.045 [2024-12-06 09:56:50.132029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.045 [2024-12-06 09:56:50.132039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:25.045 [2024-12-06 09:56:50.135609] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.045 [2024-12-06 09:56:50.135639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.045 [2024-12-06 09:56:50.135650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:25.045 [2024-12-06 09:56:50.139083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.045 [2024-12-06 09:56:50.139113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:25.045 [2024-12-06 09:56:50.139124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:25.045 [2024-12-06 09:56:50.142605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.045 [2024-12-06 09:56:50.142636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.045 [2024-12-06 09:56:50.142647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:25.045 [2024-12-06 09:56:50.146138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.045 [2024-12-06 09:56:50.146169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.045 [2024-12-06 09:56:50.146180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:25.045 [2024-12-06 09:56:50.149645] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.045 [2024-12-06 09:56:50.149673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.045 [2024-12-06 09:56:50.149686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:25.045 [2024-12-06 09:56:50.153141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.045 [2024-12-06 09:56:50.153173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.045 [2024-12-06 09:56:50.153184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:25.045 [2024-12-06 09:56:50.156622] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.045 [2024-12-06 09:56:50.156653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.045 [2024-12-06 09:56:50.156664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:25.045 [2024-12-06 09:56:50.160163] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.045 [2024-12-06 09:56:50.160195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.045 [2024-12-06 09:56:50.160206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:25.045 [2024-12-06 09:56:50.163688] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.045 [2024-12-06 09:56:50.163718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25056 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.045 [2024-12-06 09:56:50.163729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:25.045 [2024-12-06 09:56:50.167166] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.045 [2024-12-06 09:56:50.167205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.045 [2024-12-06 09:56:50.167224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:25.045 [2024-12-06 09:56:50.170727] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.045 [2024-12-06 09:56:50.170757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.046 [2024-12-06 09:56:50.170768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:25.046 [2024-12-06 09:56:50.174226] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.046 [2024-12-06 09:56:50.174258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.046 [2024-12-06 09:56:50.174269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:25.046 [2024-12-06 09:56:50.177790] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.046 [2024-12-06 09:56:50.177822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.046 [2024-12-06 09:56:50.177833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:25.046 [2024-12-06 09:56:50.181312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.046 [2024-12-06 09:56:50.181344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.046 [2024-12-06 09:56:50.181355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:25.046 [2024-12-06 09:56:50.184866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.046 [2024-12-06 09:56:50.184898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.046 [2024-12-06 09:56:50.184909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:25.046 [2024-12-06 09:56:50.188445] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.046 [2024-12-06 09:56:50.188477] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.046 [2024-12-06 09:56:50.188487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:25.046 [2024-12-06 09:56:50.191985] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.046 [2024-12-06 09:56:50.192017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.046 [2024-12-06 09:56:50.192028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:25.046 [2024-12-06 09:56:50.195468] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.046 [2024-12-06 09:56:50.195499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.046 [2024-12-06 09:56:50.195510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:25.046 [2024-12-06 09:56:50.198944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.046 [2024-12-06 09:56:50.198976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.046 [2024-12-06 09:56:50.198987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:25.046 [2024-12-06 09:56:50.202471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.046 [2024-12-06 09:56:50.202501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.046 [2024-12-06 09:56:50.202512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:25.046 [2024-12-06 09:56:50.206020] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.046 [2024-12-06 09:56:50.206052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.046 [2024-12-06 09:56:50.206062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:25.046 [2024-12-06 09:56:50.209561] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.046 [2024-12-06 09:56:50.209611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.046 [2024-12-06 09:56:50.209621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:25.046 [2024-12-06 09:56:50.213116] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.046 
[2024-12-06 09:56:50.213148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.046 [2024-12-06 09:56:50.213159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:25.046 [2024-12-06 09:56:50.216659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.046 [2024-12-06 09:56:50.216690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.046 [2024-12-06 09:56:50.216701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:25.046 [2024-12-06 09:56:50.220093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.046 [2024-12-06 09:56:50.220125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.046 [2024-12-06 09:56:50.220136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:25.046 [2024-12-06 09:56:50.223651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.046 [2024-12-06 09:56:50.223682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.046 [2024-12-06 09:56:50.223693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:25.046 [2024-12-06 09:56:50.227134] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.046 [2024-12-06 09:56:50.227165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.046 [2024-12-06 09:56:50.227176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:25.046 [2024-12-06 09:56:50.230682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.046 [2024-12-06 09:56:50.230713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.046 [2024-12-06 09:56:50.230724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:25.046 [2024-12-06 09:56:50.234222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.046 [2024-12-06 09:56:50.234254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.046 [2024-12-06 09:56:50.234265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:25.046 [2024-12-06 09:56:50.237729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x134d620) 00:19:25.046 [2024-12-06 09:56:50.237760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.046 [2024-12-06 09:56:50.237770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:25.046 [2024-12-06 09:56:50.241299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.046 [2024-12-06 09:56:50.241331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.046 [2024-12-06 09:56:50.241342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:25.046 [2024-12-06 09:56:50.244826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.046 [2024-12-06 09:56:50.244859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.046 [2024-12-06 09:56:50.244869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:25.046 [2024-12-06 09:56:50.248391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.046 [2024-12-06 09:56:50.248423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.046 [2024-12-06 09:56:50.248433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:25.046 [2024-12-06 09:56:50.251906] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.046 [2024-12-06 09:56:50.251937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.046 [2024-12-06 09:56:50.251948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:25.046 [2024-12-06 09:56:50.255422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.046 [2024-12-06 09:56:50.255455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.046 [2024-12-06 09:56:50.255466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:25.046 [2024-12-06 09:56:50.258938] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.047 [2024-12-06 09:56:50.258969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.047 [2024-12-06 09:56:50.258980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:25.047 [2024-12-06 09:56:50.262455] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.047 [2024-12-06 09:56:50.262487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.047 [2024-12-06 09:56:50.262497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:25.047 [2024-12-06 09:56:50.265990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.047 [2024-12-06 09:56:50.266021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.047 [2024-12-06 09:56:50.266032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:25.047 [2024-12-06 09:56:50.269472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.047 [2024-12-06 09:56:50.269504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.047 [2024-12-06 09:56:50.269515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:25.047 [2024-12-06 09:56:50.273163] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.047 [2024-12-06 09:56:50.273196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.047 [2024-12-06 09:56:50.273207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:25.047 [2024-12-06 09:56:50.276768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.047 [2024-12-06 09:56:50.276799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.047 [2024-12-06 09:56:50.276811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:25.047 [2024-12-06 09:56:50.280377] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.047 [2024-12-06 09:56:50.280409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.047 [2024-12-06 09:56:50.280421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:25.047 [2024-12-06 09:56:50.284164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.047 [2024-12-06 09:56:50.284207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.047 [2024-12-06 09:56:50.284218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:19:25.047 [2024-12-06 09:56:50.287928] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.047 [2024-12-06 09:56:50.287970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.047 [2024-12-06 09:56:50.287981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:25.047 [2024-12-06 09:56:50.291644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.047 [2024-12-06 09:56:50.291675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.047 [2024-12-06 09:56:50.291686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:25.047 [2024-12-06 09:56:50.295175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.047 [2024-12-06 09:56:50.295223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.047 [2024-12-06 09:56:50.295238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:25.047 [2024-12-06 09:56:50.298940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.047 [2024-12-06 09:56:50.298982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.047 [2024-12-06 09:56:50.298996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:25.047 [2024-12-06 09:56:50.302674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.047 [2024-12-06 09:56:50.302705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.047 [2024-12-06 09:56:50.302716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:25.047 [2024-12-06 09:56:50.306344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.047 [2024-12-06 09:56:50.306378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.047 [2024-12-06 09:56:50.306388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:25.047 [2024-12-06 09:56:50.309894] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.047 [2024-12-06 09:56:50.309937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.047 [2024-12-06 09:56:50.309948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:25.047 [2024-12-06 09:56:50.313391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.047 [2024-12-06 09:56:50.313431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.047 [2024-12-06 09:56:50.313442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:25.307 [2024-12-06 09:56:50.316963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.307 [2024-12-06 09:56:50.316995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.307 [2024-12-06 09:56:50.317005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:25.307 [2024-12-06 09:56:50.320435] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.307 [2024-12-06 09:56:50.320467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.307 [2024-12-06 09:56:50.320478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:25.307 [2024-12-06 09:56:50.324049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.307 [2024-12-06 09:56:50.324081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.307 [2024-12-06 09:56:50.324092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:25.307 [2024-12-06 09:56:50.327549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.307 [2024-12-06 09:56:50.327591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.307 [2024-12-06 09:56:50.327602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:25.307 [2024-12-06 09:56:50.331009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.307 [2024-12-06 09:56:50.331039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.307 [2024-12-06 09:56:50.331049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:25.307 [2024-12-06 09:56:50.334486] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.307 [2024-12-06 09:56:50.334518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.307 [2024-12-06 09:56:50.334529] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:25.307 [2024-12-06 09:56:50.338039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.307 [2024-12-06 09:56:50.338071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.307 [2024-12-06 09:56:50.338082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:25.307 [2024-12-06 09:56:50.341516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.307 [2024-12-06 09:56:50.341557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.307 [2024-12-06 09:56:50.341578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:25.307 [2024-12-06 09:56:50.345040] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.307 [2024-12-06 09:56:50.345082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.307 [2024-12-06 09:56:50.345102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:25.307 [2024-12-06 09:56:50.348558] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.307 [2024-12-06 09:56:50.348598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.307 [2024-12-06 09:56:50.348609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:25.307 [2024-12-06 09:56:50.352422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.307 [2024-12-06 09:56:50.352453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.307 [2024-12-06 09:56:50.352464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:25.307 [2024-12-06 09:56:50.355920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.307 [2024-12-06 09:56:50.355952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.307 [2024-12-06 09:56:50.355963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:25.307 [2024-12-06 09:56:50.359391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.307 [2024-12-06 09:56:50.359423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.307 [2024-12-06 
09:56:50.359433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:25.307 [2024-12-06 09:56:50.362928] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.307 [2024-12-06 09:56:50.362959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.307 [2024-12-06 09:56:50.362970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:25.307 [2024-12-06 09:56:50.366431] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.307 [2024-12-06 09:56:50.366463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.307 [2024-12-06 09:56:50.366473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:25.307 [2024-12-06 09:56:50.369933] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.307 [2024-12-06 09:56:50.369965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.307 [2024-12-06 09:56:50.369975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:25.307 [2024-12-06 09:56:50.373582] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.307 [2024-12-06 09:56:50.373613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.307 [2024-12-06 09:56:50.373624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:25.307 [2024-12-06 09:56:50.377048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.307 [2024-12-06 09:56:50.377079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.307 [2024-12-06 09:56:50.377089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:25.307 [2024-12-06 09:56:50.380613] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.307 [2024-12-06 09:56:50.380644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.307 [2024-12-06 09:56:50.380654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:25.307 [2024-12-06 09:56:50.384421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.307 [2024-12-06 09:56:50.384454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:19:25.307 [2024-12-06 09:56:50.384465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:25.307 [2024-12-06 09:56:50.388212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.308 [2024-12-06 09:56:50.388254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.308 [2024-12-06 09:56:50.388265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:25.308 [2024-12-06 09:56:50.391958] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.308 [2024-12-06 09:56:50.391990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.308 [2024-12-06 09:56:50.392001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:25.308 [2024-12-06 09:56:50.395664] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.308 [2024-12-06 09:56:50.395700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.308 [2024-12-06 09:56:50.395711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:25.308 [2024-12-06 09:56:50.399473] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.308 [2024-12-06 09:56:50.399506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.308 [2024-12-06 09:56:50.399517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:25.308 [2024-12-06 09:56:50.403224] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.308 [2024-12-06 09:56:50.403258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.308 [2024-12-06 09:56:50.403268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:25.308 [2024-12-06 09:56:50.407009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.308 [2024-12-06 09:56:50.407042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.308 [2024-12-06 09:56:50.407053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:25.308 [2024-12-06 09:56:50.411054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.308 [2024-12-06 09:56:50.411087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.308 [2024-12-06 09:56:50.411098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:25.308 [2024-12-06 09:56:50.414995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.308 [2024-12-06 09:56:50.415029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.308 [2024-12-06 09:56:50.415041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:25.308 [2024-12-06 09:56:50.418850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.308 [2024-12-06 09:56:50.418883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.308 [2024-12-06 09:56:50.418894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:25.308 [2024-12-06 09:56:50.422619] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.308 [2024-12-06 09:56:50.422649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.308 [2024-12-06 09:56:50.422660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:25.308 [2024-12-06 09:56:50.426395] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.308 [2024-12-06 09:56:50.426428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.308 [2024-12-06 09:56:50.426439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:25.308 [2024-12-06 09:56:50.430100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.308 [2024-12-06 09:56:50.430132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.308 [2024-12-06 09:56:50.430143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:25.308 [2024-12-06 09:56:50.433817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.308 [2024-12-06 09:56:50.433849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.308 [2024-12-06 09:56:50.433860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:25.308 [2024-12-06 09:56:50.437491] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.308 [2024-12-06 09:56:50.437523] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.308 [2024-12-06 09:56:50.437534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:25.308 [2024-12-06 09:56:50.441138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.308 [2024-12-06 09:56:50.441171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.308 [2024-12-06 09:56:50.441181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:25.308 [2024-12-06 09:56:50.444866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.308 [2024-12-06 09:56:50.444900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.308 [2024-12-06 09:56:50.444911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:25.308 [2024-12-06 09:56:50.448679] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.308 [2024-12-06 09:56:50.448711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.308 [2024-12-06 09:56:50.448722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:25.308 [2024-12-06 09:56:50.452415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.308 [2024-12-06 09:56:50.452448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.308 [2024-12-06 09:56:50.452459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:25.308 [2024-12-06 09:56:50.456152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.308 [2024-12-06 09:56:50.456186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.308 [2024-12-06 09:56:50.456198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:25.308 [2024-12-06 09:56:50.459941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.308 [2024-12-06 09:56:50.459975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.308 [2024-12-06 09:56:50.459986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:25.308 [2024-12-06 09:56:50.463739] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 
00:19:25.308 [2024-12-06 09:56:50.463772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.308 [2024-12-06 09:56:50.463783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:25.308 [2024-12-06 09:56:50.467440] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.308 [2024-12-06 09:56:50.467473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.308 [2024-12-06 09:56:50.467484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:25.308 [2024-12-06 09:56:50.471190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.308 [2024-12-06 09:56:50.471247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.308 [2024-12-06 09:56:50.471267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:25.308 [2024-12-06 09:56:50.474922] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.308 [2024-12-06 09:56:50.474955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.308 [2024-12-06 09:56:50.474966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:25.308 [2024-12-06 09:56:50.478633] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.308 [2024-12-06 09:56:50.478665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.308 [2024-12-06 09:56:50.478676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:25.308 [2024-12-06 09:56:50.482247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.308 [2024-12-06 09:56:50.482280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.308 [2024-12-06 09:56:50.482307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:25.308 [2024-12-06 09:56:50.485949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.308 [2024-12-06 09:56:50.485982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.308 [2024-12-06 09:56:50.485993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:25.308 [2024-12-06 09:56:50.489681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.309 [2024-12-06 09:56:50.489712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.309 [2024-12-06 09:56:50.489723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:25.309 [2024-12-06 09:56:50.493340] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.309 [2024-12-06 09:56:50.493382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.309 [2024-12-06 09:56:50.493393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:25.309 [2024-12-06 09:56:50.497002] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.309 [2024-12-06 09:56:50.497034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.309 [2024-12-06 09:56:50.497045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:25.309 [2024-12-06 09:56:50.500748] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.309 [2024-12-06 09:56:50.500780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.309 [2024-12-06 09:56:50.500791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:25.309 [2024-12-06 09:56:50.504371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.309 [2024-12-06 09:56:50.504404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.309 [2024-12-06 09:56:50.504415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:25.309 [2024-12-06 09:56:50.508143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.309 [2024-12-06 09:56:50.508177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.309 [2024-12-06 09:56:50.508188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:25.309 [2024-12-06 09:56:50.511851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.309 [2024-12-06 09:56:50.511883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.309 [2024-12-06 09:56:50.511895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:25.309 [2024-12-06 09:56:50.515565] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.309 [2024-12-06 09:56:50.515608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.309 [2024-12-06 09:56:50.515619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:25.309 [2024-12-06 09:56:50.519259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.309 [2024-12-06 09:56:50.519292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.309 [2024-12-06 09:56:50.519303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:25.309 [2024-12-06 09:56:50.522983] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.309 [2024-12-06 09:56:50.523014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.309 [2024-12-06 09:56:50.523025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:25.309 [2024-12-06 09:56:50.526712] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.309 [2024-12-06 09:56:50.526743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.309 [2024-12-06 09:56:50.526755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:25.309 [2024-12-06 09:56:50.530416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.309 [2024-12-06 09:56:50.530458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.309 [2024-12-06 09:56:50.530470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:25.309 [2024-12-06 09:56:50.534205] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.309 [2024-12-06 09:56:50.534238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.309 [2024-12-06 09:56:50.534249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:25.309 [2024-12-06 09:56:50.537948] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.309 [2024-12-06 09:56:50.537980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.309 [2024-12-06 09:56:50.537991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 
m:0 dnr:0 00:19:25.309 [2024-12-06 09:56:50.541427] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.309 [2024-12-06 09:56:50.541458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.309 [2024-12-06 09:56:50.541469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:25.309 [2024-12-06 09:56:50.544924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.309 [2024-12-06 09:56:50.544955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.309 [2024-12-06 09:56:50.544966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:25.309 [2024-12-06 09:56:50.548430] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.309 [2024-12-06 09:56:50.548462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.309 [2024-12-06 09:56:50.548472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:25.309 [2024-12-06 09:56:50.551948] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.309 [2024-12-06 09:56:50.551979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.309 [2024-12-06 09:56:50.551990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:25.309 [2024-12-06 09:56:50.555427] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.309 [2024-12-06 09:56:50.555459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.309 [2024-12-06 09:56:50.555469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:25.309 [2024-12-06 09:56:50.558938] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.309 [2024-12-06 09:56:50.558970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.309 [2024-12-06 09:56:50.558980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:25.309 [2024-12-06 09:56:50.562481] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.309 [2024-12-06 09:56:50.562512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.309 [2024-12-06 09:56:50.562523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:25.309 [2024-12-06 09:56:50.566044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.309 [2024-12-06 09:56:50.566076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.309 [2024-12-06 09:56:50.566086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:25.309 [2024-12-06 09:56:50.569490] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.309 [2024-12-06 09:56:50.569521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.309 [2024-12-06 09:56:50.569532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:25.309 [2024-12-06 09:56:50.573019] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.309 [2024-12-06 09:56:50.573051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.309 [2024-12-06 09:56:50.573062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:25.571 [2024-12-06 09:56:50.576538] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.571 [2024-12-06 09:56:50.576579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.571 [2024-12-06 09:56:50.576591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:25.571 [2024-12-06 09:56:50.580039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.571 [2024-12-06 09:56:50.580070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.571 [2024-12-06 09:56:50.580081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:25.571 [2024-12-06 09:56:50.583522] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.571 [2024-12-06 09:56:50.583555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.571 [2024-12-06 09:56:50.583576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:25.571 [2024-12-06 09:56:50.587010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.571 [2024-12-06 09:56:50.587041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.571 [2024-12-06 09:56:50.587051] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:25.571 [2024-12-06 09:56:50.590534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.571 [2024-12-06 09:56:50.590576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.572 [2024-12-06 09:56:50.590589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:25.572 [2024-12-06 09:56:50.594058] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.572 [2024-12-06 09:56:50.594090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.572 [2024-12-06 09:56:50.594101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:25.572 [2024-12-06 09:56:50.597610] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.572 [2024-12-06 09:56:50.597644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.572 [2024-12-06 09:56:50.597655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:25.572 [2024-12-06 09:56:50.601164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.572 [2024-12-06 09:56:50.601206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.572 [2024-12-06 09:56:50.601226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:25.572 [2024-12-06 09:56:50.604679] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.572 [2024-12-06 09:56:50.604711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.572 [2024-12-06 09:56:50.604722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:25.572 [2024-12-06 09:56:50.608179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.572 [2024-12-06 09:56:50.608211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.572 [2024-12-06 09:56:50.608222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:25.572 [2024-12-06 09:56:50.611681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.572 [2024-12-06 09:56:50.611712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.572 
[2024-12-06 09:56:50.611723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:25.572 [2024-12-06 09:56:50.615189] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.572 [2024-12-06 09:56:50.615254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.572 [2024-12-06 09:56:50.615265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:25.572 [2024-12-06 09:56:50.618774] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.572 [2024-12-06 09:56:50.618803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.572 [2024-12-06 09:56:50.618813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:25.572 [2024-12-06 09:56:50.622306] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.572 [2024-12-06 09:56:50.622337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.572 [2024-12-06 09:56:50.622347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:25.572 [2024-12-06 09:56:50.625874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.572 [2024-12-06 09:56:50.625906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.572 [2024-12-06 09:56:50.625917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:25.572 [2024-12-06 09:56:50.629399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.572 [2024-12-06 09:56:50.629431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.572 [2024-12-06 09:56:50.629442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:25.572 [2024-12-06 09:56:50.632874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.572 [2024-12-06 09:56:50.632905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.572 [2024-12-06 09:56:50.632915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:25.572 [2024-12-06 09:56:50.636399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.572 [2024-12-06 09:56:50.636430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20288 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.572 [2024-12-06 09:56:50.636441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:25.572 [2024-12-06 09:56:50.639910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.572 [2024-12-06 09:56:50.639941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.572 [2024-12-06 09:56:50.639951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:25.572 [2024-12-06 09:56:50.643404] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.572 [2024-12-06 09:56:50.643436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.572 [2024-12-06 09:56:50.643447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:25.572 [2024-12-06 09:56:50.646890] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.572 [2024-12-06 09:56:50.646920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.572 [2024-12-06 09:56:50.646931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:25.572 [2024-12-06 09:56:50.650489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.572 [2024-12-06 09:56:50.650530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.572 [2024-12-06 09:56:50.650540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:25.572 [2024-12-06 09:56:50.654039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.572 [2024-12-06 09:56:50.654080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.572 [2024-12-06 09:56:50.654101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:25.572 [2024-12-06 09:56:50.657622] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.572 [2024-12-06 09:56:50.657670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.572 [2024-12-06 09:56:50.657680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:25.572 [2024-12-06 09:56:50.661238] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.572 [2024-12-06 09:56:50.661280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:4 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.572 [2024-12-06 09:56:50.661291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:25.572 [2024-12-06 09:56:50.664927] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.572 [2024-12-06 09:56:50.664960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.572 [2024-12-06 09:56:50.664971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:25.572 [2024-12-06 09:56:50.668695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.572 [2024-12-06 09:56:50.668745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.572 [2024-12-06 09:56:50.668756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:25.572 [2024-12-06 09:56:50.672381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.572 [2024-12-06 09:56:50.672414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.572 [2024-12-06 09:56:50.672425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:25.572 [2024-12-06 09:56:50.676118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.572 [2024-12-06 09:56:50.676150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.572 [2024-12-06 09:56:50.676161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:25.572 [2024-12-06 09:56:50.679696] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.572 [2024-12-06 09:56:50.679727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.572 [2024-12-06 09:56:50.679737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:25.572 [2024-12-06 09:56:50.683203] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.572 [2024-12-06 09:56:50.683246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.572 [2024-12-06 09:56:50.683257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:25.572 [2024-12-06 09:56:50.686738] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.573 [2024-12-06 09:56:50.686768] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.573 [2024-12-06 09:56:50.686779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:25.573 [2024-12-06 09:56:50.690284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.573 [2024-12-06 09:56:50.690317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.573 [2024-12-06 09:56:50.690328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:25.573 [2024-12-06 09:56:50.693809] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.573 [2024-12-06 09:56:50.693841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.573 [2024-12-06 09:56:50.693852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:25.573 [2024-12-06 09:56:50.697332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.573 [2024-12-06 09:56:50.697364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.573 [2024-12-06 09:56:50.697375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:25.573 [2024-12-06 09:56:50.700887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.573 [2024-12-06 09:56:50.700919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.573 [2024-12-06 09:56:50.700929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:25.573 [2024-12-06 09:56:50.704418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.573 [2024-12-06 09:56:50.704450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.573 [2024-12-06 09:56:50.704461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:25.573 [2024-12-06 09:56:50.707913] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.573 [2024-12-06 09:56:50.707945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.573 [2024-12-06 09:56:50.707956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:25.573 [2024-12-06 09:56:50.711460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 
00:19:25.573 [2024-12-06 09:56:50.711492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.573 [2024-12-06 09:56:50.711504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:25.573 [2024-12-06 09:56:50.714937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.573 [2024-12-06 09:56:50.714974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.573 [2024-12-06 09:56:50.714985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:25.573 [2024-12-06 09:56:50.718421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.573 [2024-12-06 09:56:50.718465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.573 [2024-12-06 09:56:50.718475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:25.573 [2024-12-06 09:56:50.722020] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.573 [2024-12-06 09:56:50.722059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.573 [2024-12-06 09:56:50.722070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:25.573 [2024-12-06 09:56:50.725610] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.573 [2024-12-06 09:56:50.725640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.573 [2024-12-06 09:56:50.725650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:25.573 [2024-12-06 09:56:50.729189] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.573 [2024-12-06 09:56:50.729229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.573 [2024-12-06 09:56:50.729240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:25.573 [2024-12-06 09:56:50.732717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.573 [2024-12-06 09:56:50.732756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.573 [2024-12-06 09:56:50.732770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:25.573 [2024-12-06 09:56:50.736343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x134d620) 00:19:25.573 [2024-12-06 09:56:50.736377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.573 [2024-12-06 09:56:50.736388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:25.573 [2024-12-06 09:56:50.739883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.573 [2024-12-06 09:56:50.739916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.573 [2024-12-06 09:56:50.739926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:25.573 [2024-12-06 09:56:50.743426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.573 [2024-12-06 09:56:50.743459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.573 [2024-12-06 09:56:50.743470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:25.573 [2024-12-06 09:56:50.746987] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.573 [2024-12-06 09:56:50.747017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.573 [2024-12-06 09:56:50.747028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:25.573 [2024-12-06 09:56:50.750470] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.573 [2024-12-06 09:56:50.750503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.573 [2024-12-06 09:56:50.750513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:25.573 [2024-12-06 09:56:50.753988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.573 [2024-12-06 09:56:50.754020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.573 [2024-12-06 09:56:50.754031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:25.573 [2024-12-06 09:56:50.757530] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.573 [2024-12-06 09:56:50.757561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.573 [2024-12-06 09:56:50.757584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:25.573 [2024-12-06 09:56:50.761006] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.573 [2024-12-06 09:56:50.761038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.573 [2024-12-06 09:56:50.761049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:25.573 [2024-12-06 09:56:50.764543] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.573 [2024-12-06 09:56:50.764596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.573 [2024-12-06 09:56:50.764620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:25.573 [2024-12-06 09:56:50.768074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.573 [2024-12-06 09:56:50.768106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.573 [2024-12-06 09:56:50.768117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:25.573 [2024-12-06 09:56:50.771624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.573 [2024-12-06 09:56:50.771667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.573 [2024-12-06 09:56:50.771678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:25.573 [2024-12-06 09:56:50.775129] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.573 [2024-12-06 09:56:50.775159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.573 [2024-12-06 09:56:50.775170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:25.573 [2024-12-06 09:56:50.778673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.573 [2024-12-06 09:56:50.778704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.573 [2024-12-06 09:56:50.778716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:25.573 [2024-12-06 09:56:50.782226] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.574 [2024-12-06 09:56:50.782258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.574 [2024-12-06 09:56:50.782268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 
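Each completion in the block above carries NVMe status (00/22), i.e. Command Transient Transport Error, which is how the host reports the mismatched data digest it just detected on the response; the summary below still shows io_failed: 0, presumably because the bdev layer retries these I/Os (the write pass further down calls bdev_nvme_set_options with --bdev-retry-count -1). As a rough cross-check against the error counter the script queries next, the injected completions can also be tallied from a saved copy of this console output; the file name console.log is only illustrative:

  # count injected digest-error completions in a saved copy of this console output
  grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' console.log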
00:19:25.574 8540.50 IOPS, 1067.56 MiB/s [2024-12-06T09:56:50.846Z] [2024-12-06 09:56:50.786537] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134d620) 00:19:25.574 [2024-12-06 09:56:50.786582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.574 [2024-12-06 09:56:50.786594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:25.574 00:19:25.574 Latency(us) 00:19:25.574 [2024-12-06T09:56:50.846Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:25.574 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:19:25.574 nvme0n1 : 2.00 8536.37 1067.05 0.00 0.00 1871.40 1653.29 13047.62 00:19:25.574 [2024-12-06T09:56:50.846Z] =================================================================================================================== 00:19:25.574 [2024-12-06T09:56:50.846Z] Total : 8536.37 1067.05 0.00 0.00 1871.40 1653.29 13047.62 00:19:25.574 { 00:19:25.574 "results": [ 00:19:25.574 { 00:19:25.574 "job": "nvme0n1", 00:19:25.574 "core_mask": "0x2", 00:19:25.574 "workload": "randread", 00:19:25.574 "status": "finished", 00:19:25.574 "queue_depth": 16, 00:19:25.574 "io_size": 131072, 00:19:25.574 "runtime": 2.002843, 00:19:25.574 "iops": 8536.365556361632, 00:19:25.574 "mibps": 1067.045694545204, 00:19:25.574 "io_failed": 0, 00:19:25.574 "io_timeout": 0, 00:19:25.574 "avg_latency_us": 1871.3970114905858, 00:19:25.574 "min_latency_us": 1653.2945454545454, 00:19:25.574 "max_latency_us": 13047.621818181819 00:19:25.574 } 00:19:25.574 ], 00:19:25.574 "core_count": 1 00:19:25.574 } 00:19:25.574 09:56:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:19:25.574 09:56:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:19:25.574 09:56:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:19:25.574 09:56:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:19:25.574 | .driver_specific 00:19:25.574 | .nvme_error 00:19:25.574 | .status_code 00:19:25.574 | .command_transient_transport_error' 00:19:25.839 09:56:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 552 > 0 )) 00:19:25.840 09:56:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80232 00:19:25.840 09:56:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80232 ']' 00:19:25.840 09:56:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80232 00:19:25.840 09:56:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:19:25.840 09:56:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:25.840 09:56:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80232 00:19:25.840 09:56:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:25.840 09:56:51 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:25.840 killing process with pid 80232 00:19:25.840 09:56:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80232' 00:19:25.840 09:56:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80232 00:19:25.840 Received shutdown signal, test time was about 2.000000 seconds 00:19:25.840 00:19:25.840 Latency(us) 00:19:25.840 [2024-12-06T09:56:51.112Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:25.840 [2024-12-06T09:56:51.112Z] =================================================================================================================== 00:19:25.840 [2024-12-06T09:56:51.112Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:25.840 09:56:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80232 00:19:26.098 09:56:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:19:26.098 09:56:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:19:26.098 09:56:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:19:26.098 09:56:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:19:26.098 09:56:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:19:26.098 09:56:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80285 00:19:26.098 09:56:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:19:26.098 09:56:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80285 /var/tmp/bperf.sock 00:19:26.098 09:56:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80285 ']' 00:19:26.098 09:56:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:26.098 09:56:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:26.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:26.098 09:56:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:26.098 09:56:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:26.098 09:56:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:26.357 [2024-12-06 09:56:51.392769] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
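The get_transient_errcount helper traced above reads the controller's error statistics over the bdevperf RPC socket and extracts the transient-transport-error counter with jq; the test only asserts that the counter is non-zero (552 in this run) before tearing down the first bdevperf instance (killprocess 80232) and starting a second one for the randwrite pass. A standalone version of that query, kept as a sketch and reusing the socket path and bdev name from this run, would be:

  # dump iostat for the bperf-attached bdev and pull out the transient transport error count
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'

The per-status nvme_error counters come from the same --nvme-error-stat option that the trace below passes to bdev_nvme_set_options for the write pass.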
00:19:26.357 [2024-12-06 09:56:51.392860] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80285 ] 00:19:26.357 [2024-12-06 09:56:51.530798] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:26.357 [2024-12-06 09:56:51.574640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:26.616 [2024-12-06 09:56:51.642987] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:27.184 09:56:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:27.184 09:56:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:19:27.184 09:56:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:19:27.184 09:56:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:19:27.443 09:56:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:19:27.443 09:56:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.443 09:56:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:27.443 09:56:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.443 09:56:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:27.443 09:56:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:27.702 nvme0n1 00:19:27.702 09:56:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:19:27.702 09:56:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.702 09:56:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:27.962 09:56:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.962 09:56:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:19:27.962 09:56:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:27.962 Running I/O for 2 seconds... 
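The trace above shows how the randwrite pass is wired up before "Running I/O for 2 seconds": error statistics and unlimited bdev retries are enabled over the bperf socket, crc32c error injection is disabled while the controller is attached with data digest (--ddgst) turned on, injection is then switched to corrupt mode (the -i 256 argument presumably controls how often a digest result is corrupted), and bdevperf's RPC helper kicks off the timed run. Condensed into plain commands with the socket path, address, and names exactly as in this run; rpc.py and bdevperf.py stand for the full paths in the trace, and the two accel_error_inject_error calls are issued through the rpc_cmd helper there, so writing them as bare rpc.py invocations is an assumption:

  # track per-status NVMe error counters and retry failed I/Os indefinitely (-1)
  rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # keep crc32c healthy while the controller is attached with data digest enabled
  rpc.py accel_error_inject_error -o crc32c -t disable
  rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # start corrupting crc32c results so data digests stop matching
  rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
  # run the timed randwrite workload that bdevperf was started with (-w randwrite -o 4096 -q 128)
  bdevperf.py -s /var/tmp/bperf.sock perform_tests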
00:19:27.962 [2024-12-06 09:56:53.118880] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016efb048 00:19:27.962 [2024-12-06 09:56:53.120042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:14317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.962 [2024-12-06 09:56:53.120082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:27.962 [2024-12-06 09:56:53.131592] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016efb8b8 00:19:27.962 [2024-12-06 09:56:53.132738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:9453 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.962 [2024-12-06 09:56:53.132769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.962 [2024-12-06 09:56:53.144294] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016efc128 00:19:27.962 [2024-12-06 09:56:53.145454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:6327 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.962 [2024-12-06 09:56:53.145485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:27.962 [2024-12-06 09:56:53.157008] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016efc998 00:19:27.962 [2024-12-06 09:56:53.158089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:2137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.962 [2024-12-06 09:56:53.158118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:27.962 [2024-12-06 09:56:53.169593] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016efd208 00:19:27.962 [2024-12-06 09:56:53.170654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11381 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.962 [2024-12-06 09:56:53.170682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:27.962 [2024-12-06 09:56:53.182090] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016efda78 00:19:27.962 [2024-12-06 09:56:53.183133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:14050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.962 [2024-12-06 09:56:53.183162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:27.962 [2024-12-06 09:56:53.194647] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016efe2e8 00:19:27.962 [2024-12-06 09:56:53.195684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.962 [2024-12-06 09:56:53.195714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0076 
p:0 m:0 dnr:0 00:19:27.962 [2024-12-06 09:56:53.207150] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016efeb58 00:19:27.962 [2024-12-06 09:56:53.208172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15020 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.962 [2024-12-06 09:56:53.208201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:27.962 [2024-12-06 09:56:53.224923] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016efef90 00:19:27.962 [2024-12-06 09:56:53.226948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.962 [2024-12-06 09:56:53.226978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:28.222 [2024-12-06 09:56:53.237678] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016efeb58 00:19:28.222 [2024-12-06 09:56:53.239758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:8237 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.222 [2024-12-06 09:56:53.239789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:28.223 [2024-12-06 09:56:53.250961] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016efe2e8 00:19:28.223 [2024-12-06 09:56:53.252924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.223 [2024-12-06 09:56:53.252954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:28.223 [2024-12-06 09:56:53.263429] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016efda78 00:19:28.223 [2024-12-06 09:56:53.265406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:22102 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.223 [2024-12-06 09:56:53.265435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:28.223 [2024-12-06 09:56:53.275992] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016efd208 00:19:28.223 [2024-12-06 09:56:53.278191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:20888 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.223 [2024-12-06 09:56:53.278222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:28.223 [2024-12-06 09:56:53.289201] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016efc998 00:19:28.223 [2024-12-06 09:56:53.291201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:13768 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.223 [2024-12-06 09:56:53.291418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 
cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:28.223 [2024-12-06 09:56:53.302330] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016efc128 00:19:28.223 [2024-12-06 09:56:53.304391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:13863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.223 [2024-12-06 09:56:53.304423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:28.223 [2024-12-06 09:56:53.315257] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016efb8b8 00:19:28.223 [2024-12-06 09:56:53.317443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:20230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.223 [2024-12-06 09:56:53.317468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:28.223 [2024-12-06 09:56:53.328436] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016efb048 00:19:28.223 [2024-12-06 09:56:53.330424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:6141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.223 [2024-12-06 09:56:53.330455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:28.223 [2024-12-06 09:56:53.341442] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016efa7d8 00:19:28.223 [2024-12-06 09:56:53.343623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:8093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.223 [2024-12-06 09:56:53.343649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:28.223 [2024-12-06 09:56:53.354322] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016ef9f68 00:19:28.223 [2024-12-06 09:56:53.356337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:8053 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.223 [2024-12-06 09:56:53.356369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:28.223 [2024-12-06 09:56:53.367203] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016ef96f8 00:19:28.223 [2024-12-06 09:56:53.369066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:4702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.223 [2024-12-06 09:56:53.369097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:28.223 [2024-12-06 09:56:53.379959] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016ef8e88 00:19:28.223 [2024-12-06 09:56:53.381759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:19934 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.223 [2024-12-06 09:56:53.381790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:33 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:28.223 [2024-12-06 09:56:53.392701] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016ef8618 00:19:28.223 [2024-12-06 09:56:53.394755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:12933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.223 [2024-12-06 09:56:53.394786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:28.223 [2024-12-06 09:56:53.405533] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016ef7da8 00:19:28.223 [2024-12-06 09:56:53.407385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:8751 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.223 [2024-12-06 09:56:53.407416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:28.223 [2024-12-06 09:56:53.418647] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016ef7538 00:19:28.223 [2024-12-06 09:56:53.420453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:24048 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.223 [2024-12-06 09:56:53.420484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:28.223 [2024-12-06 09:56:53.431540] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016ef6cc8 00:19:28.223 [2024-12-06 09:56:53.433327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.223 [2024-12-06 09:56:53.433359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:28.223 [2024-12-06 09:56:53.444419] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016ef6458 00:19:28.223 [2024-12-06 09:56:53.446421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21066 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.223 [2024-12-06 09:56:53.446455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:28.223 [2024-12-06 09:56:53.457286] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016ef5be8 00:19:28.223 [2024-12-06 09:56:53.459041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:24008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.223 [2024-12-06 09:56:53.459073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:28.223 [2024-12-06 09:56:53.469959] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016ef5378 00:19:28.223 [2024-12-06 09:56:53.471657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.223 [2024-12-06 09:56:53.471693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:28.223 [2024-12-06 09:56:53.482466] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016ef4b08 00:19:28.223 [2024-12-06 09:56:53.484254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:15218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.223 [2024-12-06 09:56:53.484286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:28.483 [2024-12-06 09:56:53.495260] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016ef4298 00:19:28.483 [2024-12-06 09:56:53.496970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:24484 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.483 [2024-12-06 09:56:53.496997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:28.483 [2024-12-06 09:56:53.507778] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016ef3a28 00:19:28.483 [2024-12-06 09:56:53.509404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:1575 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.483 [2024-12-06 09:56:53.509430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:28.483 [2024-12-06 09:56:53.520432] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016ef31b8 00:19:28.483 [2024-12-06 09:56:53.522161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:8120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.483 [2024-12-06 09:56:53.522188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:28.483 [2024-12-06 09:56:53.533200] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016ef2948 00:19:28.483 [2024-12-06 09:56:53.534890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:17522 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.483 [2024-12-06 09:56:53.534916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:28.483 [2024-12-06 09:56:53.546097] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016ef20d8 00:19:28.483 [2024-12-06 09:56:53.547722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:24127 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.483 [2024-12-06 09:56:53.547750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:28.483 [2024-12-06 09:56:53.558801] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016ef1868 00:19:28.483 [2024-12-06 09:56:53.560377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:19874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.483 [2024-12-06 09:56:53.560404] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:28.483 [2024-12-06 09:56:53.571583] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016ef0ff8 00:19:28.483 [2024-12-06 09:56:53.573134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:16064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.483 [2024-12-06 09:56:53.573161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:28.483 [2024-12-06 09:56:53.584272] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016ef0788 00:19:28.483 [2024-12-06 09:56:53.585901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:2946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.483 [2024-12-06 09:56:53.585927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:28.483 [2024-12-06 09:56:53.597076] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016eeff18 00:19:28.483 [2024-12-06 09:56:53.598605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:21170 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.483 [2024-12-06 09:56:53.598630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:28.483 [2024-12-06 09:56:53.609835] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016eef6a8 00:19:28.483 [2024-12-06 09:56:53.611439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:11007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.484 [2024-12-06 09:56:53.611466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:28.484 [2024-12-06 09:56:53.622816] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016eeee38 00:19:28.484 [2024-12-06 09:56:53.624318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:11177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.484 [2024-12-06 09:56:53.624345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:28.484 [2024-12-06 09:56:53.635505] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016eee5c8 00:19:28.484 [2024-12-06 09:56:53.637005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:24543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.484 [2024-12-06 09:56:53.637030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:28.484 [2024-12-06 09:56:53.647976] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016eedd58 00:19:28.484 [2024-12-06 09:56:53.649435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:18569 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.484 [2024-12-06 
09:56:53.649461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:28.484 [2024-12-06 09:56:53.660535] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016eed4e8 00:19:28.484 [2024-12-06 09:56:53.661990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:1906 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.484 [2024-12-06 09:56:53.662017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:28.484 [2024-12-06 09:56:53.672986] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016eecc78 00:19:28.484 [2024-12-06 09:56:53.674411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:6579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.484 [2024-12-06 09:56:53.674437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:28.484 [2024-12-06 09:56:53.685528] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016eec408 00:19:28.484 [2024-12-06 09:56:53.686999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:13880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.484 [2024-12-06 09:56:53.687026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:28.484 [2024-12-06 09:56:53.698084] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016eebb98 00:19:28.484 [2024-12-06 09:56:53.699491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:24635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.484 [2024-12-06 09:56:53.699518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:28.484 [2024-12-06 09:56:53.710659] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016eeb328 00:19:28.484 [2024-12-06 09:56:53.712106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:23711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.484 [2024-12-06 09:56:53.712133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:28.484 [2024-12-06 09:56:53.723292] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016eeaab8 00:19:28.484 [2024-12-06 09:56:53.724717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:1040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.484 [2024-12-06 09:56:53.724744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:28.484 [2024-12-06 09:56:53.736278] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016eea248 00:19:28.484 [2024-12-06 09:56:53.737901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:11946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:28.484 [2024-12-06 09:56:53.737928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:28.484 [2024-12-06 09:56:53.750053] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016ee99d8 00:19:28.484 [2024-12-06 09:56:53.751492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:14642 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.484 [2024-12-06 09:56:53.751519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:28.743 [2024-12-06 09:56:53.762777] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016ee9168 00:19:28.743 [2024-12-06 09:56:53.764111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:5792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.743 [2024-12-06 09:56:53.764137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:28.743 [2024-12-06 09:56:53.775344] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016ee88f8 00:19:28.743 [2024-12-06 09:56:53.776737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:20449 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.743 [2024-12-06 09:56:53.776764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:28.743 [2024-12-06 09:56:53.787953] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016ee8088 00:19:28.743 [2024-12-06 09:56:53.789249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:20162 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.743 [2024-12-06 09:56:53.789275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:28.743 [2024-12-06 09:56:53.800500] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016ee7818 00:19:28.743 [2024-12-06 09:56:53.801786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:1993 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.743 [2024-12-06 09:56:53.801811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:28.743 [2024-12-06 09:56:53.813051] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016ee6fa8 00:19:28.743 [2024-12-06 09:56:53.814312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:22007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.743 [2024-12-06 09:56:53.814339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:28.743 [2024-12-06 09:56:53.825616] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016ee6738 00:19:28.744 [2024-12-06 09:56:53.826909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:25144 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:19:28.744 [2024-12-06 09:56:53.826934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:28.744 [2024-12-06 09:56:53.838106] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016ee5ec8 00:19:28.744 [2024-12-06 09:56:53.839347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:7037 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.744 [2024-12-06 09:56:53.839373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:28.744 [2024-12-06 09:56:53.850679] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016ee5658 00:19:28.744 [2024-12-06 09:56:53.851908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:25417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.744 [2024-12-06 09:56:53.851935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:28.744 [2024-12-06 09:56:53.863118] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016ee4de8 00:19:28.744 [2024-12-06 09:56:53.864326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:19866 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.744 [2024-12-06 09:56:53.864351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:28.744 [2024-12-06 09:56:53.875728] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016ee4578 00:19:28.744 [2024-12-06 09:56:53.876917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:16919 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.744 [2024-12-06 09:56:53.876942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:28.744 [2024-12-06 09:56:53.888235] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016ee3d08 00:19:28.744 [2024-12-06 09:56:53.889404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:21322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.744 [2024-12-06 09:56:53.889430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:28.744 [2024-12-06 09:56:53.901382] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016ee3498 00:19:28.744 [2024-12-06 09:56:53.902668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:5099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.744 [2024-12-06 09:56:53.902694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:28.744 [2024-12-06 09:56:53.914755] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016ee2c28 00:19:28.744 [2024-12-06 09:56:53.915979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:117 nsid:1 lba:15600 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.744 [2024-12-06 09:56:53.916007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:28.744 [2024-12-06 09:56:53.927965] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016ee23b8 00:19:28.744 [2024-12-06 09:56:53.929152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:2670 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.744 [2024-12-06 09:56:53.929178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:28.744 [2024-12-06 09:56:53.941430] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016ee1b48 00:19:28.744 [2024-12-06 09:56:53.942666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:9222 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.744 [2024-12-06 09:56:53.942695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:28.744 [2024-12-06 09:56:53.955329] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016ee12d8 00:19:28.744 [2024-12-06 09:56:53.956537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:20696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.744 [2024-12-06 09:56:53.956565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:28.744 [2024-12-06 09:56:53.969233] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016ee0a68 00:19:28.744 [2024-12-06 09:56:53.970411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:4556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.744 [2024-12-06 09:56:53.970438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:28.744 [2024-12-06 09:56:53.982997] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016ee01f8 00:19:28.744 [2024-12-06 09:56:53.984173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.744 [2024-12-06 09:56:53.984199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:28.744 [2024-12-06 09:56:53.996587] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016edf988 00:19:28.744 [2024-12-06 09:56:53.997753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:9470 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.744 [2024-12-06 09:56:53.997780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:28.744 [2024-12-06 09:56:54.009952] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016edf118 00:19:28.744 [2024-12-06 09:56:54.011043] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:23082 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.744 [2024-12-06 09:56:54.011070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:29.004 [2024-12-06 09:56:54.023277] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016ede8a8 00:19:29.004 [2024-12-06 09:56:54.024355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:23159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.004 [2024-12-06 09:56:54.024381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:29.004 [2024-12-06 09:56:54.036785] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016ede038 00:19:29.004 [2024-12-06 09:56:54.037840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.004 [2024-12-06 09:56:54.037867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:29.004 [2024-12-06 09:56:54.055098] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016ede038 00:19:29.004 [2024-12-06 09:56:54.057071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.004 [2024-12-06 09:56:54.057097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:29.004 [2024-12-06 09:56:54.067693] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016ede8a8 00:19:29.004 [2024-12-06 09:56:54.069638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:17130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.004 [2024-12-06 09:56:54.069664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:29.004 [2024-12-06 09:56:54.080225] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016edf118 00:19:29.004 [2024-12-06 09:56:54.082206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:16291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.004 [2024-12-06 09:56:54.082231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:29.004 [2024-12-06 09:56:54.092887] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016edf988 00:19:29.004 [2024-12-06 09:56:54.094805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:17608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.004 [2024-12-06 09:56:54.094831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:29.004 [2024-12-06 09:56:54.105335] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016ee01f8 00:19:29.004 [2024-12-06 
09:56:54.108523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:2055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.004 [2024-12-06 09:56:54.108549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:29.004 19610.00 IOPS, 76.60 MiB/s [2024-12-06T09:56:54.276Z] [2024-12-06 09:56:54.119355] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016ee0a68 00:19:29.004 [2024-12-06 09:56:54.121338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:20244 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.004 [2024-12-06 09:56:54.121364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:29.004 [2024-12-06 09:56:54.132103] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016ee12d8 00:19:29.004 [2024-12-06 09:56:54.134025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:3704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.004 [2024-12-06 09:56:54.134052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:29.004 [2024-12-06 09:56:54.144903] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016ee1b48 00:19:29.004 [2024-12-06 09:56:54.146764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:11569 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.004 [2024-12-06 09:56:54.146790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:29.004 [2024-12-06 09:56:54.157932] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016ee23b8 00:19:29.004 [2024-12-06 09:56:54.159804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:16869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.004 [2024-12-06 09:56:54.159830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:29.004 [2024-12-06 09:56:54.170418] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016ee2c28 00:19:29.004 [2024-12-06 09:56:54.172360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:4495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.004 [2024-12-06 09:56:54.172385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:29.004 [2024-12-06 09:56:54.183111] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016ee3498 00:19:29.004 [2024-12-06 09:56:54.184981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:17034 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.004 [2024-12-06 09:56:54.185006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:29.004 [2024-12-06 09:56:54.195721] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x259ab70) with pdu=0x200016ee3d08 00:19:29.004 [2024-12-06 09:56:54.197503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:24027 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.004 [2024-12-06 09:56:54.197528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:29.004 [2024-12-06 09:56:54.208262] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016ee4578 00:19:29.004 [2024-12-06 09:56:54.210094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:3989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.004 [2024-12-06 09:56:54.210119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:29.004 [2024-12-06 09:56:54.220871] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016ee4de8 00:19:29.004 [2024-12-06 09:56:54.222637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:8756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.004 [2024-12-06 09:56:54.222663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:29.004 [2024-12-06 09:56:54.233377] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016ee5658 00:19:29.004 [2024-12-06 09:56:54.235305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:19423 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.004 [2024-12-06 09:56:54.235334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:29.004 [2024-12-06 09:56:54.246541] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016ee5ec8 00:19:29.004 [2024-12-06 09:56:54.248452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:19287 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.004 [2024-12-06 09:56:54.248480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:29.004 [2024-12-06 09:56:54.259406] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016ee6738 00:19:29.004 [2024-12-06 09:56:54.261186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:3476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.004 [2024-12-06 09:56:54.261213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:29.004 [2024-12-06 09:56:54.273026] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016ee6fa8 00:19:29.265 [2024-12-06 09:56:54.274781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:9166 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.265 [2024-12-06 09:56:54.274807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:29.265 [2024-12-06 09:56:54.285521] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016ee7818 00:19:29.265 [2024-12-06 09:56:54.287241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:6380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.265 [2024-12-06 09:56:54.287269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:29.265 [2024-12-06 09:56:54.298177] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016ee8088 00:19:29.265 [2024-12-06 09:56:54.299893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:15398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.265 [2024-12-06 09:56:54.299920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:29.265 [2024-12-06 09:56:54.310745] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016ee88f8 00:19:29.265 [2024-12-06 09:56:54.312430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:19921 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.265 [2024-12-06 09:56:54.312457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:29.265 [2024-12-06 09:56:54.323344] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016ee9168 00:19:29.265 [2024-12-06 09:56:54.325092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:15895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.265 [2024-12-06 09:56:54.325117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:29.265 [2024-12-06 09:56:54.336022] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016ee99d8 00:19:29.265 [2024-12-06 09:56:54.337706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:16535 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.265 [2024-12-06 09:56:54.337733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:29.265 [2024-12-06 09:56:54.348631] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016eea248 00:19:29.265 [2024-12-06 09:56:54.350232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:3770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.265 [2024-12-06 09:56:54.350258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:29.265 [2024-12-06 09:56:54.361192] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016eeaab8 00:19:29.265 [2024-12-06 09:56:54.362819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:1449 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.265 [2024-12-06 09:56:54.362845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:29.265 
[2024-12-06 09:56:54.373875] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016eeb328 00:19:29.265 [2024-12-06 09:56:54.375522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.265 [2024-12-06 09:56:54.375548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:29.265 [2024-12-06 09:56:54.386551] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016eebb98 00:19:29.265 [2024-12-06 09:56:54.388183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:23606 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.265 [2024-12-06 09:56:54.388209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:29.265 [2024-12-06 09:56:54.399239] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016eec408 00:19:29.265 [2024-12-06 09:56:54.400820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:13173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.265 [2024-12-06 09:56:54.400846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:29.265 [2024-12-06 09:56:54.411838] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016eecc78 00:19:29.265 [2024-12-06 09:56:54.413414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:2024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.265 [2024-12-06 09:56:54.413440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:29.265 [2024-12-06 09:56:54.424425] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016eed4e8 00:19:29.265 [2024-12-06 09:56:54.426049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.265 [2024-12-06 09:56:54.426074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:29.265 [2024-12-06 09:56:54.437212] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016eedd58 00:19:29.265 [2024-12-06 09:56:54.438745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9614 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.265 [2024-12-06 09:56:54.438771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:29.265 [2024-12-06 09:56:54.450545] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016eee5c8 00:19:29.265 [2024-12-06 09:56:54.452099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.265 [2024-12-06 09:56:54.452126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0032 p:0 m:0 
dnr:0 00:19:29.265 [2024-12-06 09:56:54.463323] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016eeee38 00:19:29.265 [2024-12-06 09:56:54.464857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1742 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.265 [2024-12-06 09:56:54.464884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:29.265 [2024-12-06 09:56:54.476004] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016eef6a8 00:19:29.265 [2024-12-06 09:56:54.477458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:4692 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.265 [2024-12-06 09:56:54.477483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:29.265 [2024-12-06 09:56:54.488752] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016eeff18 00:19:29.266 [2024-12-06 09:56:54.490187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.266 [2024-12-06 09:56:54.490213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:29.266 [2024-12-06 09:56:54.501459] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016ef0788 00:19:29.266 [2024-12-06 09:56:54.502990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:6803 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.266 [2024-12-06 09:56:54.503015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:29.266 [2024-12-06 09:56:54.514066] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016ef0ff8 00:19:29.266 [2024-12-06 09:56:54.515484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:7686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.266 [2024-12-06 09:56:54.515511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:29.266 [2024-12-06 09:56:54.526518] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016ef1868 00:19:29.266 [2024-12-06 09:56:54.527976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:14676 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.266 [2024-12-06 09:56:54.528002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:29.526 [2024-12-06 09:56:54.539132] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016ef20d8 00:19:29.526 [2024-12-06 09:56:54.540572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:8878 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.526 [2024-12-06 09:56:54.540630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 
sqhd:0024 p:0 m:0 dnr:0 00:19:29.526 [2024-12-06 09:56:54.551722] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016ef2948 00:19:29.526 [2024-12-06 09:56:54.553084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:8553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.526 [2024-12-06 09:56:54.553110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:29.526 [2024-12-06 09:56:54.564262] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016ef31b8 00:19:29.526 [2024-12-06 09:56:54.565634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:20656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.526 [2024-12-06 09:56:54.565661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:29.526 [2024-12-06 09:56:54.576820] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016ef3a28 00:19:29.526 [2024-12-06 09:56:54.578152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:5042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.526 [2024-12-06 09:56:54.578177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:29.526 [2024-12-06 09:56:54.589392] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016ef4298 00:19:29.526 [2024-12-06 09:56:54.590796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:20018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.526 [2024-12-06 09:56:54.590822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:29.526 [2024-12-06 09:56:54.602006] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016ef4b08 00:19:29.526 [2024-12-06 09:56:54.603316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:6988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.526 [2024-12-06 09:56:54.603342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:29.526 [2024-12-06 09:56:54.614606] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016ef5378 00:19:29.526 [2024-12-06 09:56:54.615903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:14166 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.526 [2024-12-06 09:56:54.615928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:29.526 [2024-12-06 09:56:54.627104] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016ef5be8 00:19:29.526 [2024-12-06 09:56:54.628411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:2247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.526 [2024-12-06 09:56:54.628437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:65 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:29.526 [2024-12-06 09:56:54.639752] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016ef6458 00:19:29.526 [2024-12-06 09:56:54.641023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:7909 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.526 [2024-12-06 09:56:54.641048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:29.526 [2024-12-06 09:56:54.652239] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016ef6cc8 00:19:29.526 [2024-12-06 09:56:54.653478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:19533 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.526 [2024-12-06 09:56:54.653503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:29.526 [2024-12-06 09:56:54.664927] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016ef7538 00:19:29.526 [2024-12-06 09:56:54.666153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:12794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.526 [2024-12-06 09:56:54.666178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:29.526 [2024-12-06 09:56:54.677479] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016ef7da8 00:19:29.526 [2024-12-06 09:56:54.678753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:22235 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.526 [2024-12-06 09:56:54.678779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:29.526 [2024-12-06 09:56:54.690584] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016ef8618 00:19:29.526 [2024-12-06 09:56:54.691789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:5820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.526 [2024-12-06 09:56:54.691815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:29.526 [2024-12-06 09:56:54.703092] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016ef8e88 00:19:29.526 [2024-12-06 09:56:54.704280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:23149 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.526 [2024-12-06 09:56:54.704306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:29.526 [2024-12-06 09:56:54.715723] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016ef96f8 00:19:29.527 [2024-12-06 09:56:54.716888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:13831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.527 [2024-12-06 09:56:54.716913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:29.527 [2024-12-06 09:56:54.728230] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016ef9f68 00:19:29.527 [2024-12-06 09:56:54.729377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:12587 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.527 [2024-12-06 09:56:54.729402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:29.527 [2024-12-06 09:56:54.740956] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016efa7d8 00:19:29.527 [2024-12-06 09:56:54.742141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:7198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.527 [2024-12-06 09:56:54.742166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:29.527 [2024-12-06 09:56:54.754158] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016efb048 00:19:29.527 [2024-12-06 09:56:54.755285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:10142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.527 [2024-12-06 09:56:54.755313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:29.527 [2024-12-06 09:56:54.767384] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016efb8b8 00:19:29.527 [2024-12-06 09:56:54.768590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:7258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.527 [2024-12-06 09:56:54.768647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.527 [2024-12-06 09:56:54.780646] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016efc128 00:19:29.527 [2024-12-06 09:56:54.781800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:20958 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.527 [2024-12-06 09:56:54.781825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:29.527 [2024-12-06 09:56:54.793312] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016efc998 00:19:29.527 [2024-12-06 09:56:54.794435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:15326 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.527 [2024-12-06 09:56:54.794461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:29.787 [2024-12-06 09:56:54.805938] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016efd208 00:19:29.787 [2024-12-06 09:56:54.806992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:10178 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.787 [2024-12-06 09:56:54.807017] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:29.787 [2024-12-06 09:56:54.818442] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016efda78 00:19:29.787 [2024-12-06 09:56:54.819716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:6977 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.787 [2024-12-06 09:56:54.819760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:29.787 [2024-12-06 09:56:54.831278] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016efe2e8 00:19:29.787 [2024-12-06 09:56:54.832303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:16397 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.787 [2024-12-06 09:56:54.832328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:29.787 [2024-12-06 09:56:54.843858] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016efeb58 00:19:29.787 [2024-12-06 09:56:54.844866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:6980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.787 [2024-12-06 09:56:54.844891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:29.787 [2024-12-06 09:56:54.861608] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016efef90 00:19:29.787 [2024-12-06 09:56:54.863596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.787 [2024-12-06 09:56:54.863623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:29.787 [2024-12-06 09:56:54.874060] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016efeb58 00:19:29.787 [2024-12-06 09:56:54.876032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.787 [2024-12-06 09:56:54.876058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:29.787 [2024-12-06 09:56:54.886598] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016efe2e8 00:19:29.787 [2024-12-06 09:56:54.888652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:17861 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.787 [2024-12-06 09:56:54.888677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:29.787 [2024-12-06 09:56:54.899547] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016efda78 00:19:29.787 [2024-12-06 09:56:54.901472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:23044 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.787 [2024-12-06 
09:56:54.901498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:29.787 [2024-12-06 09:56:54.912264] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016efd208 00:19:29.787 [2024-12-06 09:56:54.914274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:1557 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.787 [2024-12-06 09:56:54.914300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:29.787 [2024-12-06 09:56:54.925123] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016efc998 00:19:29.787 [2024-12-06 09:56:54.927018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:23348 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.787 [2024-12-06 09:56:54.927044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:29.787 [2024-12-06 09:56:54.937888] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016efc128 00:19:29.787 [2024-12-06 09:56:54.939880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:13820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.787 [2024-12-06 09:56:54.939907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:29.787 [2024-12-06 09:56:54.950592] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016efb8b8 00:19:29.787 [2024-12-06 09:56:54.952457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:14404 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.787 [2024-12-06 09:56:54.952485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:29.787 [2024-12-06 09:56:54.963252] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016efb048 00:19:29.787 [2024-12-06 09:56:54.965210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:17337 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.787 [2024-12-06 09:56:54.965236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:29.787 [2024-12-06 09:56:54.976441] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016efa7d8 00:19:29.787 [2024-12-06 09:56:54.978393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:18953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.787 [2024-12-06 09:56:54.978420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:29.787 [2024-12-06 09:56:54.989789] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016ef9f68 00:19:29.787 [2024-12-06 09:56:54.991778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:25244 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:29.787 [2024-12-06 09:56:54.991806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:29.787 [2024-12-06 09:56:55.002948] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016ef96f8 00:19:29.787 [2024-12-06 09:56:55.004766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.787 [2024-12-06 09:56:55.004792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:29.787 [2024-12-06 09:56:55.015569] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016ef8e88 00:19:29.787 [2024-12-06 09:56:55.017430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:3892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.787 [2024-12-06 09:56:55.017457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:29.787 [2024-12-06 09:56:55.028432] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016ef8618 00:19:29.787 [2024-12-06 09:56:55.030261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:11529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.787 [2024-12-06 09:56:55.030288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:29.787 [2024-12-06 09:56:55.041155] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016ef7da8 00:19:29.787 [2024-12-06 09:56:55.042964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:1663 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.787 [2024-12-06 09:56:55.042989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:29.787 [2024-12-06 09:56:55.054047] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016ef7538 00:19:29.787 [2024-12-06 09:56:55.055927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:4618 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.787 [2024-12-06 09:56:55.055955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:30.046 [2024-12-06 09:56:55.067551] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016ef6cc8 00:19:30.046 [2024-12-06 09:56:55.069379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.046 [2024-12-06 09:56:55.069416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:30.046 [2024-12-06 09:56:55.080914] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016ef6458 00:19:30.046 [2024-12-06 09:56:55.082723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:25519 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:19:30.046 [2024-12-06 09:56:55.082750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:30.046 [2024-12-06 09:56:55.094139] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016ef5be8 00:19:30.046 [2024-12-06 09:56:55.096029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:8047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.046 [2024-12-06 09:56:55.096056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:30.046 [2024-12-06 09:56:55.107516] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ab70) with pdu=0x200016ef5378 00:19:30.046 19735.50 IOPS, 77.09 MiB/s [2024-12-06T09:56:55.318Z] [2024-12-06 09:56:55.109307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:22028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.046 [2024-12-06 09:56:55.109333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:30.046 00:19:30.046 Latency(us) 00:19:30.046 [2024-12-06T09:56:55.318Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:30.046 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:30.046 nvme0n1 : 2.00 19768.94 77.22 0.00 0.00 6469.30 1966.08 24188.74 00:19:30.046 [2024-12-06T09:56:55.318Z] =================================================================================================================== 00:19:30.046 [2024-12-06T09:56:55.318Z] Total : 19768.94 77.22 0.00 0.00 6469.30 1966.08 24188.74 00:19:30.046 { 00:19:30.046 "results": [ 00:19:30.046 { 00:19:30.046 "job": "nvme0n1", 00:19:30.046 "core_mask": "0x2", 00:19:30.046 "workload": "randwrite", 00:19:30.046 "status": "finished", 00:19:30.046 "queue_depth": 128, 00:19:30.046 "io_size": 4096, 00:19:30.046 "runtime": 2.003092, 00:19:30.046 "iops": 19768.93722305316, 00:19:30.046 "mibps": 77.2224110275514, 00:19:30.046 "io_failed": 0, 00:19:30.046 "io_timeout": 0, 00:19:30.046 "avg_latency_us": 6469.297188680155, 00:19:30.046 "min_latency_us": 1966.08, 00:19:30.046 "max_latency_us": 24188.741818181818 00:19:30.046 } 00:19:30.046 ], 00:19:30.046 "core_count": 1 00:19:30.046 } 00:19:30.046 09:56:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:19:30.046 09:56:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:19:30.046 09:56:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:19:30.046 09:56:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:19:30.046 | .driver_specific 00:19:30.046 | .nvme_error 00:19:30.046 | .status_code 00:19:30.046 | .command_transient_transport_error' 00:19:30.304 09:56:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 155 > 0 )) 00:19:30.304 09:56:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80285 00:19:30.304 09:56:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80285 ']' 
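The xtrace lines above show how digest.sh turns the iostat dump into a pass/fail check: get_transient_errcount queries bdev_get_iostat over the bperf RPC socket and extracts the command_transient_transport_error counter with jq, and the test then requires that count to be positive (155 for this run). A minimal standalone sketch of the same query, reusing the socket path, bdev name, and jq path exactly as they appear in the trace:

    # Read the transient transport error counter recorded via --nvme-error-stat
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    errcount=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    # The run only passes if the injected digest corruption produced at least one such completion
    (( errcount > 0 ))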
00:19:30.304 09:56:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80285 00:19:30.304 09:56:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:19:30.304 09:56:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:30.304 09:56:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80285 00:19:30.304 killing process with pid 80285 00:19:30.304 Received shutdown signal, test time was about 2.000000 seconds 00:19:30.304 00:19:30.304 Latency(us) 00:19:30.304 [2024-12-06T09:56:55.576Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:30.304 [2024-12-06T09:56:55.576Z] =================================================================================================================== 00:19:30.304 [2024-12-06T09:56:55.576Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:30.304 09:56:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:30.304 09:56:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:30.304 09:56:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80285' 00:19:30.304 09:56:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80285 00:19:30.304 09:56:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80285 00:19:30.561 09:56:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:19:30.561 09:56:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:19:30.561 09:56:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:19:30.561 09:56:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:19:30.561 09:56:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:19:30.562 09:56:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80345 00:19:30.562 09:56:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80345 /var/tmp/bperf.sock 00:19:30.562 09:56:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:19:30.562 09:56:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80345 ']' 00:19:30.562 09:56:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:30.562 09:56:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:30.562 09:56:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:30.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
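After the first bdevperf instance is killed, run_bperf_err relaunches it for a randwrite pass with 128 KiB I/O at queue depth 16, as the trace above records. Condensed into a sketch (binary path, socket, and flags copied from the trace; waitforlisten is the autotest helper that polls the RPC socket until the application is up):

    # Start bdevperf in wait-for-RPC mode (-z) on its own socket: 2-second randwrite, 128 KiB I/O, qd 16
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z &
    bperfpid=$!
    # Block until the new process is listening on the socket before issuing any RPCs
    waitforlisten "$bperfpid" /var/tmp/bperf.sock

Because 131072 bytes exceeds bdevperf's 65536-byte zero-copy threshold, the next lines note that the zero copy mechanism will not be used.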
00:19:30.562 09:56:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:30.562 09:56:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:30.562 I/O size of 131072 is greater than zero copy threshold (65536). 00:19:30.562 Zero copy mechanism will not be used. 00:19:30.562 [2024-12-06 09:56:55.739742] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:19:30.562 [2024-12-06 09:56:55.739846] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80345 ] 00:19:30.819 [2024-12-06 09:56:55.880114] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:30.819 [2024-12-06 09:56:55.923341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:30.819 [2024-12-06 09:56:55.991847] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:31.752 09:56:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:31.752 09:56:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:19:31.752 09:56:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:19:31.752 09:56:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:19:31.752 09:56:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:19:31.752 09:56:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.752 09:56:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:32.011 09:56:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.011 09:56:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:32.011 09:56:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:32.269 nvme0n1 00:19:32.269 09:56:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:19:32.269 09:56:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.269 09:56:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:32.269 09:56:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.269 09:56:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:19:32.269 09:56:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:32.269 I/O size of 131072 is greater than zero copy threshold (65536). 00:19:32.269 Zero copy mechanism will not be used. 00:19:32.269 Running I/O for 2 seconds... 00:19:32.269 [2024-12-06 09:56:57.453164] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.269 [2024-12-06 09:56:57.453300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.269 [2024-12-06 09:56:57.453329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:32.269 [2024-12-06 09:56:57.458072] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.269 [2024-12-06 09:56:57.458152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.269 [2024-12-06 09:56:57.458175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:32.269 [2024-12-06 09:56:57.462615] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.269 [2024-12-06 09:56:57.462701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.269 [2024-12-06 09:56:57.462723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:32.269 [2024-12-06 09:56:57.467154] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.269 [2024-12-06 09:56:57.467272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.269 [2024-12-06 09:56:57.467293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:32.269 [2024-12-06 09:56:57.471637] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.269 [2024-12-06 09:56:57.471726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.269 [2024-12-06 09:56:57.471747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:32.269 [2024-12-06 09:56:57.475987] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.269 [2024-12-06 09:56:57.476070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.269 [2024-12-06 09:56:57.476091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:32.269 [2024-12-06 09:56:57.480401] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.269 [2024-12-06 09:56:57.480480] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.269 [2024-12-06 09:56:57.480501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:32.269 [2024-12-06 09:56:57.484847] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.269 [2024-12-06 09:56:57.484936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.269 [2024-12-06 09:56:57.484957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:32.269 [2024-12-06 09:56:57.489212] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.269 [2024-12-06 09:56:57.489296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.269 [2024-12-06 09:56:57.489316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:32.269 [2024-12-06 09:56:57.493669] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.269 [2024-12-06 09:56:57.493764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.269 [2024-12-06 09:56:57.493785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:32.269 [2024-12-06 09:56:57.498055] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.269 [2024-12-06 09:56:57.498125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.269 [2024-12-06 09:56:57.498146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:32.269 [2024-12-06 09:56:57.502488] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.269 [2024-12-06 09:56:57.502582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.269 [2024-12-06 09:56:57.502603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:32.269 [2024-12-06 09:56:57.506884] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.269 [2024-12-06 09:56:57.506966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.269 [2024-12-06 09:56:57.506987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:32.269 [2024-12-06 09:56:57.511298] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.270 [2024-12-06 
09:56:57.511357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.270 [2024-12-06 09:56:57.511378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:32.270 [2024-12-06 09:56:57.515752] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.270 [2024-12-06 09:56:57.515869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.270 [2024-12-06 09:56:57.515890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:32.270 [2024-12-06 09:56:57.520129] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.270 [2024-12-06 09:56:57.520209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.270 [2024-12-06 09:56:57.520230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:32.270 [2024-12-06 09:56:57.524546] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.270 [2024-12-06 09:56:57.524639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.270 [2024-12-06 09:56:57.524660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:32.270 [2024-12-06 09:56:57.528963] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.270 [2024-12-06 09:56:57.529019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.270 [2024-12-06 09:56:57.529040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:32.270 [2024-12-06 09:56:57.533496] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.270 [2024-12-06 09:56:57.533619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.270 [2024-12-06 09:56:57.533640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:32.270 [2024-12-06 09:56:57.537919] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.270 [2024-12-06 09:56:57.538001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.270 [2024-12-06 09:56:57.538021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:32.529 [2024-12-06 09:56:57.542423] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 
00:19:32.529 [2024-12-06 09:56:57.542501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.529 [2024-12-06 09:56:57.542522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:32.529 [2024-12-06 09:56:57.546960] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.529 [2024-12-06 09:56:57.547041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.529 [2024-12-06 09:56:57.547067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:32.529 [2024-12-06 09:56:57.551459] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.529 [2024-12-06 09:56:57.551517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.529 [2024-12-06 09:56:57.551538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:32.529 [2024-12-06 09:56:57.555926] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.529 [2024-12-06 09:56:57.556010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.529 [2024-12-06 09:56:57.556032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:32.529 [2024-12-06 09:56:57.560349] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.529 [2024-12-06 09:56:57.560427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.529 [2024-12-06 09:56:57.560448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:32.529 [2024-12-06 09:56:57.564875] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.529 [2024-12-06 09:56:57.564963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.529 [2024-12-06 09:56:57.564983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:32.529 [2024-12-06 09:56:57.569282] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.529 [2024-12-06 09:56:57.569371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.529 [2024-12-06 09:56:57.569422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:32.529 [2024-12-06 09:56:57.573869] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) 
with pdu=0x200016eff3c8 00:19:32.529 [2024-12-06 09:56:57.573940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.529 [2024-12-06 09:56:57.573961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:32.529 [2024-12-06 09:56:57.578328] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.529 [2024-12-06 09:56:57.578399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.530 [2024-12-06 09:56:57.578420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:32.530 [2024-12-06 09:56:57.582863] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.530 [2024-12-06 09:56:57.582944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.530 [2024-12-06 09:56:57.582963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:32.530 [2024-12-06 09:56:57.587144] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.530 [2024-12-06 09:56:57.587231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.530 [2024-12-06 09:56:57.587251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:32.530 [2024-12-06 09:56:57.591445] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.530 [2024-12-06 09:56:57.591501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.530 [2024-12-06 09:56:57.591521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:32.530 [2024-12-06 09:56:57.595788] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.530 [2024-12-06 09:56:57.595873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.530 [2024-12-06 09:56:57.595893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:32.530 [2024-12-06 09:56:57.599955] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.530 [2024-12-06 09:56:57.600033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.530 [2024-12-06 09:56:57.600053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:32.530 [2024-12-06 09:56:57.604154] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.530 [2024-12-06 09:56:57.604231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.530 [2024-12-06 09:56:57.604250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:32.530 [2024-12-06 09:56:57.608357] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.530 [2024-12-06 09:56:57.608413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.530 [2024-12-06 09:56:57.608433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:32.530 [2024-12-06 09:56:57.612555] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.530 [2024-12-06 09:56:57.612665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.530 [2024-12-06 09:56:57.612684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:32.530 [2024-12-06 09:56:57.616715] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.530 [2024-12-06 09:56:57.616792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.530 [2024-12-06 09:56:57.616812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:32.530 [2024-12-06 09:56:57.620864] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.530 [2024-12-06 09:56:57.620947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.530 [2024-12-06 09:56:57.620967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:32.530 [2024-12-06 09:56:57.625070] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.530 [2024-12-06 09:56:57.625149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.530 [2024-12-06 09:56:57.625169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:32.530 [2024-12-06 09:56:57.629326] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.530 [2024-12-06 09:56:57.629403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.530 [2024-12-06 09:56:57.629423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:32.530 [2024-12-06 09:56:57.633634] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.530 [2024-12-06 09:56:57.633712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.530 [2024-12-06 09:56:57.633731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:32.530 [2024-12-06 09:56:57.637840] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.530 [2024-12-06 09:56:57.637893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.530 [2024-12-06 09:56:57.637913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:32.530 [2024-12-06 09:56:57.641996] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.530 [2024-12-06 09:56:57.642049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.530 [2024-12-06 09:56:57.642068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:32.530 [2024-12-06 09:56:57.646137] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.530 [2024-12-06 09:56:57.646233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.530 [2024-12-06 09:56:57.646253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:32.530 [2024-12-06 09:56:57.650287] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.530 [2024-12-06 09:56:57.650342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.530 [2024-12-06 09:56:57.650361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:32.530 [2024-12-06 09:56:57.654491] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.530 [2024-12-06 09:56:57.654605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.530 [2024-12-06 09:56:57.654625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:32.530 [2024-12-06 09:56:57.658654] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.530 [2024-12-06 09:56:57.658735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.530 [2024-12-06 09:56:57.658754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:32.530 
[2024-12-06 09:56:57.662814] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.530 [2024-12-06 09:56:57.662890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.530 [2024-12-06 09:56:57.662910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:32.530 [2024-12-06 09:56:57.666973] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.530 [2024-12-06 09:56:57.667057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.530 [2024-12-06 09:56:57.667077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:32.530 [2024-12-06 09:56:57.671191] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.530 [2024-12-06 09:56:57.671301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.530 [2024-12-06 09:56:57.671321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:32.530 [2024-12-06 09:56:57.675387] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.530 [2024-12-06 09:56:57.675465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.530 [2024-12-06 09:56:57.675485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:32.530 [2024-12-06 09:56:57.679649] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.530 [2024-12-06 09:56:57.679728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.530 [2024-12-06 09:56:57.679747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:32.530 [2024-12-06 09:56:57.683803] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.530 [2024-12-06 09:56:57.683872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.530 [2024-12-06 09:56:57.683891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:32.530 [2024-12-06 09:56:57.687952] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.530 [2024-12-06 09:56:57.688030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.530 [2024-12-06 09:56:57.688050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 
p:0 m:0 dnr:0 00:19:32.530 [2024-12-06 09:56:57.692157] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.530 [2024-12-06 09:56:57.692265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.530 [2024-12-06 09:56:57.692285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:32.530 [2024-12-06 09:56:57.696390] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.530 [2024-12-06 09:56:57.696469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.530 [2024-12-06 09:56:57.696489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:32.531 [2024-12-06 09:56:57.700653] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.531 [2024-12-06 09:56:57.700754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.531 [2024-12-06 09:56:57.700774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:32.531 [2024-12-06 09:56:57.704818] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.531 [2024-12-06 09:56:57.704897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.531 [2024-12-06 09:56:57.704916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:32.531 [2024-12-06 09:56:57.708997] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.531 [2024-12-06 09:56:57.709077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.531 [2024-12-06 09:56:57.709096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:32.531 [2024-12-06 09:56:57.713176] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.531 [2024-12-06 09:56:57.713274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.531 [2024-12-06 09:56:57.713294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:32.531 [2024-12-06 09:56:57.717383] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.531 [2024-12-06 09:56:57.717480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.531 [2024-12-06 09:56:57.717500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:32.531 [2024-12-06 09:56:57.721615] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.531 [2024-12-06 09:56:57.721696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.531 [2024-12-06 09:56:57.721715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:32.531 [2024-12-06 09:56:57.725830] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.531 [2024-12-06 09:56:57.725914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.531 [2024-12-06 09:56:57.725934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:32.531 [2024-12-06 09:56:57.729973] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.531 [2024-12-06 09:56:57.730051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.531 [2024-12-06 09:56:57.730071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:32.531 [2024-12-06 09:56:57.734229] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.531 [2024-12-06 09:56:57.734308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.531 [2024-12-06 09:56:57.734327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:32.531 [2024-12-06 09:56:57.738478] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.531 [2024-12-06 09:56:57.738560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.531 [2024-12-06 09:56:57.738611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:32.531 [2024-12-06 09:56:57.742805] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.531 [2024-12-06 09:56:57.742871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.531 [2024-12-06 09:56:57.742891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:32.531 [2024-12-06 09:56:57.747022] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.531 [2024-12-06 09:56:57.747110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.531 [2024-12-06 09:56:57.747130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:32.531 [2024-12-06 09:56:57.751219] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.531 [2024-12-06 09:56:57.751303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.531 [2024-12-06 09:56:57.751323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:32.531 [2024-12-06 09:56:57.755491] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.531 [2024-12-06 09:56:57.755604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.531 [2024-12-06 09:56:57.755625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:32.531 [2024-12-06 09:56:57.759713] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.531 [2024-12-06 09:56:57.759789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.531 [2024-12-06 09:56:57.759809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:32.531 [2024-12-06 09:56:57.763909] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.531 [2024-12-06 09:56:57.763979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.531 [2024-12-06 09:56:57.763999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:32.531 [2024-12-06 09:56:57.768095] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.531 [2024-12-06 09:56:57.768196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.531 [2024-12-06 09:56:57.768217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:32.531 [2024-12-06 09:56:57.772283] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.531 [2024-12-06 09:56:57.772362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.531 [2024-12-06 09:56:57.772382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:32.531 [2024-12-06 09:56:57.776605] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.531 [2024-12-06 09:56:57.776694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.531 [2024-12-06 09:56:57.776715] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:32.531 [2024-12-06 09:56:57.780758] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.531 [2024-12-06 09:56:57.780840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.531 [2024-12-06 09:56:57.780860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:32.531 [2024-12-06 09:56:57.784896] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.531 [2024-12-06 09:56:57.784974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.531 [2024-12-06 09:56:57.784994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:32.531 [2024-12-06 09:56:57.789058] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.531 [2024-12-06 09:56:57.789139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.531 [2024-12-06 09:56:57.789160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:32.531 [2024-12-06 09:56:57.793329] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.531 [2024-12-06 09:56:57.793417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.531 [2024-12-06 09:56:57.793437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:32.531 [2024-12-06 09:56:57.797671] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.531 [2024-12-06 09:56:57.797782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.531 [2024-12-06 09:56:57.797802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:32.792 [2024-12-06 09:56:57.801900] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.792 [2024-12-06 09:56:57.801977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.792 [2024-12-06 09:56:57.801996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:32.792 [2024-12-06 09:56:57.806072] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.792 [2024-12-06 09:56:57.806173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.792 [2024-12-06 09:56:57.806193] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:32.792 [2024-12-06 09:56:57.810498] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.792 [2024-12-06 09:56:57.810583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.792 [2024-12-06 09:56:57.810635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:32.792 [2024-12-06 09:56:57.814902] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.792 [2024-12-06 09:56:57.814985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.792 [2024-12-06 09:56:57.815005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:32.792 [2024-12-06 09:56:57.819152] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.792 [2024-12-06 09:56:57.819428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.792 [2024-12-06 09:56:57.819449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:32.792 [2024-12-06 09:56:57.823872] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.792 [2024-12-06 09:56:57.823956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.792 [2024-12-06 09:56:57.823981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:32.792 [2024-12-06 09:56:57.828242] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.792 [2024-12-06 09:56:57.828332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.792 [2024-12-06 09:56:57.828352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:32.792 [2024-12-06 09:56:57.832611] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.792 [2024-12-06 09:56:57.832694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.792 [2024-12-06 09:56:57.832714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:32.792 [2024-12-06 09:56:57.836771] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.792 [2024-12-06 09:56:57.836849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.792 [2024-12-06 
09:56:57.836869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:32.792 [2024-12-06 09:56:57.840924] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.792 [2024-12-06 09:56:57.841007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.792 [2024-12-06 09:56:57.841028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:32.792 [2024-12-06 09:56:57.845075] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.792 [2024-12-06 09:56:57.845176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.792 [2024-12-06 09:56:57.845196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:32.792 [2024-12-06 09:56:57.849251] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.792 [2024-12-06 09:56:57.849330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.792 [2024-12-06 09:56:57.849349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:32.792 [2024-12-06 09:56:57.853520] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.792 [2024-12-06 09:56:57.853640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.792 [2024-12-06 09:56:57.853661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:32.792 [2024-12-06 09:56:57.857678] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.792 [2024-12-06 09:56:57.857779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.792 [2024-12-06 09:56:57.857799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:32.792 [2024-12-06 09:56:57.861810] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.792 [2024-12-06 09:56:57.861888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.792 [2024-12-06 09:56:57.861908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:32.792 [2024-12-06 09:56:57.866010] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.793 [2024-12-06 09:56:57.866119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:19:32.793 [2024-12-06 09:56:57.866140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:32.793 [2024-12-06 09:56:57.870294] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.793 [2024-12-06 09:56:57.870386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.793 [2024-12-06 09:56:57.870406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:32.793 [2024-12-06 09:56:57.874671] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.793 [2024-12-06 09:56:57.874753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.793 [2024-12-06 09:56:57.874773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:32.793 [2024-12-06 09:56:57.879123] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.793 [2024-12-06 09:56:57.879254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.793 [2024-12-06 09:56:57.879275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:32.793 [2024-12-06 09:56:57.883500] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.793 [2024-12-06 09:56:57.883604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.793 [2024-12-06 09:56:57.883625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:32.793 [2024-12-06 09:56:57.887692] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.793 [2024-12-06 09:56:57.887772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.793 [2024-12-06 09:56:57.887792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:32.793 [2024-12-06 09:56:57.891875] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.793 [2024-12-06 09:56:57.891976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.793 [2024-12-06 09:56:57.891995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:32.793 [2024-12-06 09:56:57.896206] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.793 [2024-12-06 09:56:57.896285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2144 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.793 [2024-12-06 09:56:57.896305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:32.793 [2024-12-06 09:56:57.900455] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.793 [2024-12-06 09:56:57.900544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.793 [2024-12-06 09:56:57.900563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:32.793 [2024-12-06 09:56:57.904726] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.793 [2024-12-06 09:56:57.904807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.793 [2024-12-06 09:56:57.904828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:32.793 [2024-12-06 09:56:57.908922] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.793 [2024-12-06 09:56:57.908981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.793 [2024-12-06 09:56:57.909001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:32.793 [2024-12-06 09:56:57.913082] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.793 [2024-12-06 09:56:57.913150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.793 [2024-12-06 09:56:57.913170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:32.793 [2024-12-06 09:56:57.917391] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.793 [2024-12-06 09:56:57.917479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.793 [2024-12-06 09:56:57.917499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:32.793 [2024-12-06 09:56:57.921701] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.793 [2024-12-06 09:56:57.921781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.793 [2024-12-06 09:56:57.921801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:32.793 [2024-12-06 09:56:57.926044] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.793 [2024-12-06 09:56:57.926125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 
nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.793 [2024-12-06 09:56:57.926145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:32.793 [2024-12-06 09:56:57.930417] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.793 [2024-12-06 09:56:57.930511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.793 [2024-12-06 09:56:57.930530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:32.793 [2024-12-06 09:56:57.934759] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.793 [2024-12-06 09:56:57.934839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.793 [2024-12-06 09:56:57.934858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:32.793 [2024-12-06 09:56:57.938918] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.793 [2024-12-06 09:56:57.938997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.793 [2024-12-06 09:56:57.939017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:32.793 [2024-12-06 09:56:57.943157] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.793 [2024-12-06 09:56:57.943257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.793 [2024-12-06 09:56:57.943277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:32.793 [2024-12-06 09:56:57.947440] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.793 [2024-12-06 09:56:57.947516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.793 [2024-12-06 09:56:57.947535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:32.793 [2024-12-06 09:56:57.951743] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.793 [2024-12-06 09:56:57.951822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.793 [2024-12-06 09:56:57.951842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:32.793 [2024-12-06 09:56:57.955874] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.793 [2024-12-06 09:56:57.955962] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.793 [2024-12-06 09:56:57.955981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:32.793 [2024-12-06 09:56:57.959990] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.793 [2024-12-06 09:56:57.960071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.793 [2024-12-06 09:56:57.960091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:32.793 [2024-12-06 09:56:57.964179] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.793 [2024-12-06 09:56:57.964279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.793 [2024-12-06 09:56:57.964299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:32.793 [2024-12-06 09:56:57.968492] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.793 [2024-12-06 09:56:57.968603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.793 [2024-12-06 09:56:57.968624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:32.793 [2024-12-06 09:56:57.972735] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.793 [2024-12-06 09:56:57.972815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.793 [2024-12-06 09:56:57.972834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:32.793 [2024-12-06 09:56:57.976942] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.793 [2024-12-06 09:56:57.977025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.793 [2024-12-06 09:56:57.977045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:32.793 [2024-12-06 09:56:57.981140] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.794 [2024-12-06 09:56:57.981230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.794 [2024-12-06 09:56:57.981250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:32.794 [2024-12-06 09:56:57.985430] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.794 [2024-12-06 09:56:57.985531] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.794 [2024-12-06 09:56:57.985550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:32.794 [2024-12-06 09:56:57.989660] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.794 [2024-12-06 09:56:57.989743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.794 [2024-12-06 09:56:57.989762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:32.794 [2024-12-06 09:56:57.993903] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.794 [2024-12-06 09:56:57.993961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.794 [2024-12-06 09:56:57.993980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:32.794 [2024-12-06 09:56:57.998066] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.794 [2024-12-06 09:56:57.998145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.794 [2024-12-06 09:56:57.998164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:32.794 [2024-12-06 09:56:58.002314] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.794 [2024-12-06 09:56:58.002375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.794 [2024-12-06 09:56:58.002395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:32.794 [2024-12-06 09:56:58.006540] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.794 [2024-12-06 09:56:58.006650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.794 [2024-12-06 09:56:58.006671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:32.794 [2024-12-06 09:56:58.010727] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.794 [2024-12-06 09:56:58.010797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.794 [2024-12-06 09:56:58.010816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:32.794 [2024-12-06 09:56:58.014915] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.794 [2024-12-06 
09:56:58.014998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.794 [2024-12-06 09:56:58.015017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:32.794 [2024-12-06 09:56:58.019151] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.794 [2024-12-06 09:56:58.019238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.794 [2024-12-06 09:56:58.019258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:32.794 [2024-12-06 09:56:58.023450] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.794 [2024-12-06 09:56:58.023530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.794 [2024-12-06 09:56:58.023550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:32.794 [2024-12-06 09:56:58.027766] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.794 [2024-12-06 09:56:58.027855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.794 [2024-12-06 09:56:58.027875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:32.794 [2024-12-06 09:56:58.032125] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.794 [2024-12-06 09:56:58.032340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.794 [2024-12-06 09:56:58.032361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:32.794 [2024-12-06 09:56:58.036699] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.794 [2024-12-06 09:56:58.036780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.794 [2024-12-06 09:56:58.036800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:32.794 [2024-12-06 09:56:58.041023] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.794 [2024-12-06 09:56:58.041104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.794 [2024-12-06 09:56:58.041123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:32.794 [2024-12-06 09:56:58.045456] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with 
pdu=0x200016eff3c8 00:19:32.794 [2024-12-06 09:56:58.045522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.794 [2024-12-06 09:56:58.045541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:32.794 [2024-12-06 09:56:58.049895] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.794 [2024-12-06 09:56:58.049990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.794 [2024-12-06 09:56:58.050009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:32.794 [2024-12-06 09:56:58.054290] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.794 [2024-12-06 09:56:58.054348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.794 [2024-12-06 09:56:58.054368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:32.794 [2024-12-06 09:56:58.058723] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:32.794 [2024-12-06 09:56:58.058806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.794 [2024-12-06 09:56:58.058826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:33.055 [2024-12-06 09:56:58.063141] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.055 [2024-12-06 09:56:58.063270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.055 [2024-12-06 09:56:58.063291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:33.055 [2024-12-06 09:56:58.067555] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.055 [2024-12-06 09:56:58.067642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.055 [2024-12-06 09:56:58.067661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:33.055 [2024-12-06 09:56:58.071788] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.055 [2024-12-06 09:56:58.071859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.055 [2024-12-06 09:56:58.071879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:33.055 [2024-12-06 09:56:58.076018] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.055 [2024-12-06 09:56:58.076109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.055 [2024-12-06 09:56:58.076129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:33.055 [2024-12-06 09:56:58.080250] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.055 [2024-12-06 09:56:58.080330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.055 [2024-12-06 09:56:58.080350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:33.056 [2024-12-06 09:56:58.084543] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.056 [2024-12-06 09:56:58.084651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.056 [2024-12-06 09:56:58.084673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:33.056 [2024-12-06 09:56:58.088771] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.056 [2024-12-06 09:56:58.088852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.056 [2024-12-06 09:56:58.088871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:33.056 [2024-12-06 09:56:58.092982] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.056 [2024-12-06 09:56:58.093062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.056 [2024-12-06 09:56:58.093082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:33.056 [2024-12-06 09:56:58.097221] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.056 [2024-12-06 09:56:58.097301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.056 [2024-12-06 09:56:58.097322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:33.056 [2024-12-06 09:56:58.101547] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.056 [2024-12-06 09:56:58.101647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.056 [2024-12-06 09:56:58.101668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:33.056 [2024-12-06 09:56:58.105826] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.056 [2024-12-06 09:56:58.105903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.056 [2024-12-06 09:56:58.105922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:33.056 [2024-12-06 09:56:58.109974] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.056 [2024-12-06 09:56:58.110062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.056 [2024-12-06 09:56:58.110082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:33.056 [2024-12-06 09:56:58.114123] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.056 [2024-12-06 09:56:58.114205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.056 [2024-12-06 09:56:58.114225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:33.056 [2024-12-06 09:56:58.118378] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.056 [2024-12-06 09:56:58.118466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.056 [2024-12-06 09:56:58.118485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:33.056 [2024-12-06 09:56:58.122630] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.056 [2024-12-06 09:56:58.122709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.056 [2024-12-06 09:56:58.122729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:33.056 [2024-12-06 09:56:58.126799] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.056 [2024-12-06 09:56:58.126887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.056 [2024-12-06 09:56:58.126907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:33.056 [2024-12-06 09:56:58.130982] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.056 [2024-12-06 09:56:58.131067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.056 [2024-12-06 09:56:58.131087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:33.056 [2024-12-06 09:56:58.135251] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.056 [2024-12-06 09:56:58.135326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.056 [2024-12-06 09:56:58.135346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:33.056 [2024-12-06 09:56:58.139486] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.056 [2024-12-06 09:56:58.139568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.056 [2024-12-06 09:56:58.139618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:33.056 [2024-12-06 09:56:58.143749] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.056 [2024-12-06 09:56:58.143832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.056 [2024-12-06 09:56:58.143852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:33.056 [2024-12-06 09:56:58.147937] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.056 [2024-12-06 09:56:58.148026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.056 [2024-12-06 09:56:58.148045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:33.056 [2024-12-06 09:56:58.152097] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.056 [2024-12-06 09:56:58.152198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.056 [2024-12-06 09:56:58.152217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:33.056 [2024-12-06 09:56:58.156278] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.056 [2024-12-06 09:56:58.156356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.056 [2024-12-06 09:56:58.156375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:33.056 [2024-12-06 09:56:58.160552] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.056 [2024-12-06 09:56:58.160668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.056 [2024-12-06 09:56:58.160688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:33.056 
[2024-12-06 09:56:58.164824] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.056 [2024-12-06 09:56:58.164904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.056 [2024-12-06 09:56:58.164924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:33.056 [2024-12-06 09:56:58.168953] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.056 [2024-12-06 09:56:58.169055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.056 [2024-12-06 09:56:58.169075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:33.056 [2024-12-06 09:56:58.173127] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.056 [2024-12-06 09:56:58.173212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.056 [2024-12-06 09:56:58.173231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:33.056 [2024-12-06 09:56:58.177314] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.056 [2024-12-06 09:56:58.177403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.056 [2024-12-06 09:56:58.177423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:33.056 [2024-12-06 09:56:58.181612] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.056 [2024-12-06 09:56:58.181701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.056 [2024-12-06 09:56:58.181721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:33.056 [2024-12-06 09:56:58.185775] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.056 [2024-12-06 09:56:58.185847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.056 [2024-12-06 09:56:58.185867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:33.056 [2024-12-06 09:56:58.189934] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.056 [2024-12-06 09:56:58.190018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.056 [2024-12-06 09:56:58.190037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 
p:0 m:0 dnr:0 00:19:33.056 [2024-12-06 09:56:58.194132] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.056 [2024-12-06 09:56:58.194221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.057 [2024-12-06 09:56:58.194241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:33.057 [2024-12-06 09:56:58.198407] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.057 [2024-12-06 09:56:58.198508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.057 [2024-12-06 09:56:58.198529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:33.057 [2024-12-06 09:56:58.202676] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.057 [2024-12-06 09:56:58.202759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.057 [2024-12-06 09:56:58.202778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:33.057 [2024-12-06 09:56:58.206834] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.057 [2024-12-06 09:56:58.206912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.057 [2024-12-06 09:56:58.206932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:33.057 [2024-12-06 09:56:58.211065] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.057 [2024-12-06 09:56:58.211162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.057 [2024-12-06 09:56:58.211182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:33.057 [2024-12-06 09:56:58.215365] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.057 [2024-12-06 09:56:58.215447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.057 [2024-12-06 09:56:58.215467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:33.057 [2024-12-06 09:56:58.219698] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.057 [2024-12-06 09:56:58.219776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.057 [2024-12-06 09:56:58.219796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:33.057 [2024-12-06 09:56:58.223842] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.057 [2024-12-06 09:56:58.223921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.057 [2024-12-06 09:56:58.223940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:33.057 [2024-12-06 09:56:58.228022] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.057 [2024-12-06 09:56:58.228101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.057 [2024-12-06 09:56:58.228120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:33.057 [2024-12-06 09:56:58.232257] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.057 [2024-12-06 09:56:58.232336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.057 [2024-12-06 09:56:58.232356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:33.057 [2024-12-06 09:56:58.236501] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.057 [2024-12-06 09:56:58.236623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.057 [2024-12-06 09:56:58.236644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:33.057 [2024-12-06 09:56:58.240729] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.057 [2024-12-06 09:56:58.240808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.057 [2024-12-06 09:56:58.240828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:33.057 [2024-12-06 09:56:58.244906] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.057 [2024-12-06 09:56:58.245017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.057 [2024-12-06 09:56:58.245036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:33.057 [2024-12-06 09:56:58.249025] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.057 [2024-12-06 09:56:58.249134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.057 [2024-12-06 09:56:58.249154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:33.057 [2024-12-06 09:56:58.253277] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.057 [2024-12-06 09:56:58.253356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.057 [2024-12-06 09:56:58.253375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:33.057 [2024-12-06 09:56:58.257538] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.057 [2024-12-06 09:56:58.257663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.057 [2024-12-06 09:56:58.257684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:33.057 [2024-12-06 09:56:58.261766] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.057 [2024-12-06 09:56:58.261845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.057 [2024-12-06 09:56:58.261865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:33.057 [2024-12-06 09:56:58.265952] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.057 [2024-12-06 09:56:58.266041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.057 [2024-12-06 09:56:58.266061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:33.057 [2024-12-06 09:56:58.270150] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.057 [2024-12-06 09:56:58.270229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.057 [2024-12-06 09:56:58.270249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:33.057 [2024-12-06 09:56:58.274376] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.057 [2024-12-06 09:56:58.274455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.057 [2024-12-06 09:56:58.274474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:33.057 [2024-12-06 09:56:58.278703] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.057 [2024-12-06 09:56:58.278782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.057 [2024-12-06 09:56:58.278801] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:33.057 [2024-12-06 09:56:58.282907] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.057 [2024-12-06 09:56:58.282990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.057 [2024-12-06 09:56:58.283010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:33.057 [2024-12-06 09:56:58.287124] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.057 [2024-12-06 09:56:58.287216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.057 [2024-12-06 09:56:58.287236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:33.057 [2024-12-06 09:56:58.291377] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.057 [2024-12-06 09:56:58.291446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.057 [2024-12-06 09:56:58.291466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:33.057 [2024-12-06 09:56:58.295623] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.057 [2024-12-06 09:56:58.295707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.057 [2024-12-06 09:56:58.295727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:33.057 [2024-12-06 09:56:58.299839] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.057 [2024-12-06 09:56:58.299923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.057 [2024-12-06 09:56:58.299942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:33.057 [2024-12-06 09:56:58.304025] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.057 [2024-12-06 09:56:58.304108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.057 [2024-12-06 09:56:58.304128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:33.057 [2024-12-06 09:56:58.308255] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.057 [2024-12-06 09:56:58.308344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.057 [2024-12-06 
09:56:58.308364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:33.058 [2024-12-06 09:56:58.312514] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.058 [2024-12-06 09:56:58.312631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.058 [2024-12-06 09:56:58.312651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:33.058 [2024-12-06 09:56:58.316804] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.058 [2024-12-06 09:56:58.316886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.058 [2024-12-06 09:56:58.316907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:33.058 [2024-12-06 09:56:58.321010] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.058 [2024-12-06 09:56:58.321092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.058 [2024-12-06 09:56:58.321112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:33.319 [2024-12-06 09:56:58.325224] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.319 [2024-12-06 09:56:58.325308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.319 [2024-12-06 09:56:58.325327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:33.319 [2024-12-06 09:56:58.329513] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.319 [2024-12-06 09:56:58.329628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.319 [2024-12-06 09:56:58.329649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:33.319 [2024-12-06 09:56:58.333786] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.319 [2024-12-06 09:56:58.333866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.319 [2024-12-06 09:56:58.333885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:33.319 [2024-12-06 09:56:58.337946] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.319 [2024-12-06 09:56:58.338055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:33.319 [2024-12-06 09:56:58.338075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:33.319 [2024-12-06 09:56:58.342154] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.319 [2024-12-06 09:56:58.342230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.319 [2024-12-06 09:56:58.342250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:33.319 [2024-12-06 09:56:58.346462] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.319 [2024-12-06 09:56:58.346543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.319 [2024-12-06 09:56:58.346562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:33.319 [2024-12-06 09:56:58.350695] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.319 [2024-12-06 09:56:58.350800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.319 [2024-12-06 09:56:58.350819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:33.319 [2024-12-06 09:56:58.354900] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.319 [2024-12-06 09:56:58.354980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.319 [2024-12-06 09:56:58.354999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:33.319 [2024-12-06 09:56:58.359086] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.319 [2024-12-06 09:56:58.359166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.319 [2024-12-06 09:56:58.359185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:33.319 [2024-12-06 09:56:58.363300] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.319 [2024-12-06 09:56:58.363384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.319 [2024-12-06 09:56:58.363404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:33.319 [2024-12-06 09:56:58.367585] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.319 [2024-12-06 09:56:58.367670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5408 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:19:33.319 [2024-12-06 09:56:58.367690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:33.319 [2024-12-06 09:56:58.371834] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.319 [2024-12-06 09:56:58.371930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.319 [2024-12-06 09:56:58.371949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:33.319 [2024-12-06 09:56:58.376069] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.319 [2024-12-06 09:56:58.376158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.319 [2024-12-06 09:56:58.376177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:33.319 [2024-12-06 09:56:58.380317] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.319 [2024-12-06 09:56:58.380396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.319 [2024-12-06 09:56:58.380415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:33.319 [2024-12-06 09:56:58.384681] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.319 [2024-12-06 09:56:58.384762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.319 [2024-12-06 09:56:58.384783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:33.320 [2024-12-06 09:56:58.388951] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.320 [2024-12-06 09:56:58.389030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.320 [2024-12-06 09:56:58.389049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:33.320 [2024-12-06 09:56:58.393195] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.320 [2024-12-06 09:56:58.393284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.320 [2024-12-06 09:56:58.393304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:33.320 [2024-12-06 09:56:58.397597] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.320 [2024-12-06 09:56:58.397677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.320 [2024-12-06 09:56:58.397696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:33.320 [2024-12-06 09:56:58.401837] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.320 [2024-12-06 09:56:58.401895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.320 [2024-12-06 09:56:58.401915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:33.320 [2024-12-06 09:56:58.406062] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.320 [2024-12-06 09:56:58.406294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.320 [2024-12-06 09:56:58.406315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:33.320 [2024-12-06 09:56:58.410610] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.320 [2024-12-06 09:56:58.410688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.320 [2024-12-06 09:56:58.410708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:33.320 [2024-12-06 09:56:58.415000] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.320 [2024-12-06 09:56:58.415100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.320 [2024-12-06 09:56:58.415120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:33.320 [2024-12-06 09:56:58.419469] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.320 [2024-12-06 09:56:58.419555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.320 [2024-12-06 09:56:58.419605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:33.320 [2024-12-06 09:56:58.423869] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.320 [2024-12-06 09:56:58.423939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.320 [2024-12-06 09:56:58.423958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:33.320 [2024-12-06 09:56:58.428151] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.320 [2024-12-06 09:56:58.428241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.320 [2024-12-06 09:56:58.428261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:33.320 [2024-12-06 09:56:58.432469] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.320 [2024-12-06 09:56:58.432548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.320 [2024-12-06 09:56:58.432578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:33.320 [2024-12-06 09:56:58.436832] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.320 [2024-12-06 09:56:58.436911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.320 [2024-12-06 09:56:58.436930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:33.320 [2024-12-06 09:56:58.441007] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.320 [2024-12-06 09:56:58.441096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.320 [2024-12-06 09:56:58.441116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:33.320 [2024-12-06 09:56:58.445287] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.320 [2024-12-06 09:56:58.445366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.320 [2024-12-06 09:56:58.445386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:33.320 [2024-12-06 09:56:58.449706] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.320 [2024-12-06 09:56:58.449784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.320 [2024-12-06 09:56:58.449804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:33.320 7239.00 IOPS, 904.88 MiB/s [2024-12-06T09:56:58.592Z] [2024-12-06 09:56:58.454915] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.320 [2024-12-06 09:56:58.454998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.320 [2024-12-06 09:56:58.455018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:33.320 [2024-12-06 09:56:58.459226] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.320 
[2024-12-06 09:56:58.459300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.320 [2024-12-06 09:56:58.459322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:33.320 [2024-12-06 09:56:58.463610] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.320 [2024-12-06 09:56:58.463695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.320 [2024-12-06 09:56:58.463715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:33.320 [2024-12-06 09:56:58.467890] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.320 [2024-12-06 09:56:58.467960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.320 [2024-12-06 09:56:58.467980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:33.320 [2024-12-06 09:56:58.472317] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.321 [2024-12-06 09:56:58.472396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.321 [2024-12-06 09:56:58.472415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:33.321 [2024-12-06 09:56:58.476751] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.321 [2024-12-06 09:56:58.476820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.321 [2024-12-06 09:56:58.476840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:33.321 [2024-12-06 09:56:58.481035] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.321 [2024-12-06 09:56:58.481257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.321 [2024-12-06 09:56:58.481278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:33.321 [2024-12-06 09:56:58.485641] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.321 [2024-12-06 09:56:58.485717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.321 [2024-12-06 09:56:58.485737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:33.321 [2024-12-06 09:56:58.489956] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with 
pdu=0x200016eff3c8 00:19:33.321 [2024-12-06 09:56:58.490039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.321 [2024-12-06 09:56:58.490060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:33.321 [2024-12-06 09:56:58.494317] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.321 [2024-12-06 09:56:58.494400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.321 [2024-12-06 09:56:58.494420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:33.321 [2024-12-06 09:56:58.498676] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.321 [2024-12-06 09:56:58.498756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.321 [2024-12-06 09:56:58.498776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:33.321 [2024-12-06 09:56:58.502900] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.321 [2024-12-06 09:56:58.502979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.321 [2024-12-06 09:56:58.502999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:33.321 [2024-12-06 09:56:58.507209] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.321 [2024-12-06 09:56:58.507284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.321 [2024-12-06 09:56:58.507304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:33.321 [2024-12-06 09:56:58.511507] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.321 [2024-12-06 09:56:58.511611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.321 [2024-12-06 09:56:58.511633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:33.321 [2024-12-06 09:56:58.515824] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.321 [2024-12-06 09:56:58.515914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.321 [2024-12-06 09:56:58.515933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:33.321 [2024-12-06 09:56:58.520052] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.321 [2024-12-06 09:56:58.520152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.321 [2024-12-06 09:56:58.520173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:33.321 [2024-12-06 09:56:58.524284] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.321 [2024-12-06 09:56:58.524384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.321 [2024-12-06 09:56:58.524403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:33.321 [2024-12-06 09:56:58.528622] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.321 [2024-12-06 09:56:58.528706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.321 [2024-12-06 09:56:58.528725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:33.321 [2024-12-06 09:56:58.532865] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.321 [2024-12-06 09:56:58.532945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.321 [2024-12-06 09:56:58.532964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:33.321 [2024-12-06 09:56:58.537080] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.321 [2024-12-06 09:56:58.537299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.321 [2024-12-06 09:56:58.537319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:33.321 [2024-12-06 09:56:58.541615] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.321 [2024-12-06 09:56:58.541704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.321 [2024-12-06 09:56:58.541724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:33.321 [2024-12-06 09:56:58.546175] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.321 [2024-12-06 09:56:58.546265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.321 [2024-12-06 09:56:58.546285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:33.321 [2024-12-06 09:56:58.550375] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.321 [2024-12-06 09:56:58.550456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.321 [2024-12-06 09:56:58.550475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:33.321 [2024-12-06 09:56:58.554669] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.321 [2024-12-06 09:56:58.554745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.321 [2024-12-06 09:56:58.554765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:33.321 [2024-12-06 09:56:58.558826] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.321 [2024-12-06 09:56:58.558910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.321 [2024-12-06 09:56:58.558930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:33.321 [2024-12-06 09:56:58.562979] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.322 [2024-12-06 09:56:58.563062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.322 [2024-12-06 09:56:58.563082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:33.322 [2024-12-06 09:56:58.567218] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.322 [2024-12-06 09:56:58.567297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.322 [2024-12-06 09:56:58.567317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:33.322 [2024-12-06 09:56:58.571489] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.322 [2024-12-06 09:56:58.571586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.322 [2024-12-06 09:56:58.571640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:33.322 [2024-12-06 09:56:58.575741] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.322 [2024-12-06 09:56:58.575835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.322 [2024-12-06 09:56:58.575854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:33.322 [2024-12-06 09:56:58.579905] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.322 [2024-12-06 09:56:58.579985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.322 [2024-12-06 09:56:58.580004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:33.322 [2024-12-06 09:56:58.584025] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.322 [2024-12-06 09:56:58.584103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.322 [2024-12-06 09:56:58.584122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:33.582 [2024-12-06 09:56:58.588497] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.582 [2024-12-06 09:56:58.588578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.582 [2024-12-06 09:56:58.588628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:33.582 [2024-12-06 09:56:58.593011] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.582 [2024-12-06 09:56:58.593206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.582 [2024-12-06 09:56:58.593226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:33.582 [2024-12-06 09:56:58.597887] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.582 [2024-12-06 09:56:58.598090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.582 [2024-12-06 09:56:58.598324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:33.582 [2024-12-06 09:56:58.602596] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.582 [2024-12-06 09:56:58.602819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.582 [2024-12-06 09:56:58.602970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:33.582 [2024-12-06 09:56:58.607156] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.582 [2024-12-06 09:56:58.607407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.582 [2024-12-06 09:56:58.607577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:33.582 
[2024-12-06 09:56:58.612001] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.582 [2024-12-06 09:56:58.612233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.582 [2024-12-06 09:56:58.612374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:33.582 [2024-12-06 09:56:58.616673] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.582 [2024-12-06 09:56:58.616897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.582 [2024-12-06 09:56:58.617041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:33.582 [2024-12-06 09:56:58.621294] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.582 [2024-12-06 09:56:58.621531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.582 [2024-12-06 09:56:58.621802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:33.582 [2024-12-06 09:56:58.625924] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.582 [2024-12-06 09:56:58.626156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.582 [2024-12-06 09:56:58.626298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:33.582 [2024-12-06 09:56:58.630532] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.582 [2024-12-06 09:56:58.630801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.582 [2024-12-06 09:56:58.630924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:33.582 [2024-12-06 09:56:58.635214] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.582 [2024-12-06 09:56:58.635303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.582 [2024-12-06 09:56:58.635325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:33.582 [2024-12-06 09:56:58.639598] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.582 [2024-12-06 09:56:58.639685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.582 [2024-12-06 09:56:58.639706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 
p:0 m:0 dnr:0 00:19:33.582 [2024-12-06 09:56:58.644016] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.582 [2024-12-06 09:56:58.644120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.582 [2024-12-06 09:56:58.644141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:33.582 [2024-12-06 09:56:58.648386] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.583 [2024-12-06 09:56:58.648496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.583 [2024-12-06 09:56:58.648518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:33.583 [2024-12-06 09:56:58.652814] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.583 [2024-12-06 09:56:58.652895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.583 [2024-12-06 09:56:58.652915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:33.583 [2024-12-06 09:56:58.657140] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.583 [2024-12-06 09:56:58.657220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.583 [2024-12-06 09:56:58.657240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:33.583 [2024-12-06 09:56:58.661528] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.583 [2024-12-06 09:56:58.661628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.583 [2024-12-06 09:56:58.661649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:33.583 [2024-12-06 09:56:58.665931] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.583 [2024-12-06 09:56:58.666016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.583 [2024-12-06 09:56:58.666037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:33.583 [2024-12-06 09:56:58.670300] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.583 [2024-12-06 09:56:58.670371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.583 [2024-12-06 09:56:58.670391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:33.583 [2024-12-06 09:56:58.674762] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.583 [2024-12-06 09:56:58.674833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.583 [2024-12-06 09:56:58.674854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:33.583 [2024-12-06 09:56:58.679192] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.583 [2024-12-06 09:56:58.679415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.583 [2024-12-06 09:56:58.679436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:33.583 [2024-12-06 09:56:58.683878] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.583 [2024-12-06 09:56:58.683964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.583 [2024-12-06 09:56:58.683986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:33.583 [2024-12-06 09:56:58.688276] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.583 [2024-12-06 09:56:58.688359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.583 [2024-12-06 09:56:58.688379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:33.583 [2024-12-06 09:56:58.692728] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.583 [2024-12-06 09:56:58.692801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.583 [2024-12-06 09:56:58.692822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:33.583 [2024-12-06 09:56:58.697166] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.583 [2024-12-06 09:56:58.697248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.583 [2024-12-06 09:56:58.697269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:33.583 [2024-12-06 09:56:58.701627] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.583 [2024-12-06 09:56:58.701727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.583 [2024-12-06 09:56:58.701748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:33.583 [2024-12-06 09:56:58.705991] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.583 [2024-12-06 09:56:58.706072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.583 [2024-12-06 09:56:58.706092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:33.583 [2024-12-06 09:56:58.710416] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.583 [2024-12-06 09:56:58.710502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.583 [2024-12-06 09:56:58.710522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:33.583 [2024-12-06 09:56:58.714902] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.583 [2024-12-06 09:56:58.714963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.583 [2024-12-06 09:56:58.714984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:33.583 [2024-12-06 09:56:58.719442] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.583 [2024-12-06 09:56:58.719705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.583 [2024-12-06 09:56:58.719726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:33.583 [2024-12-06 09:56:58.724152] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.583 [2024-12-06 09:56:58.724231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.583 [2024-12-06 09:56:58.724251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:33.583 [2024-12-06 09:56:58.728615] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.583 [2024-12-06 09:56:58.728702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.583 [2024-12-06 09:56:58.728722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:33.583 [2024-12-06 09:56:58.733074] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.583 [2024-12-06 09:56:58.733167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.583 [2024-12-06 09:56:58.733187] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:33.583 [2024-12-06 09:56:58.737480] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.583 [2024-12-06 09:56:58.737556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.583 [2024-12-06 09:56:58.737608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:33.583 [2024-12-06 09:56:58.741950] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.583 [2024-12-06 09:56:58.742021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.583 [2024-12-06 09:56:58.742042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:33.583 [2024-12-06 09:56:58.746426] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.583 [2024-12-06 09:56:58.746506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.583 [2024-12-06 09:56:58.746528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:33.583 [2024-12-06 09:56:58.750925] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.583 [2024-12-06 09:56:58.751149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.583 [2024-12-06 09:56:58.751170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:33.583 [2024-12-06 09:56:58.755611] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.583 [2024-12-06 09:56:58.755691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.583 [2024-12-06 09:56:58.755712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:33.583 [2024-12-06 09:56:58.759902] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.583 [2024-12-06 09:56:58.759982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.583 [2024-12-06 09:56:58.760001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:33.583 [2024-12-06 09:56:58.764117] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.583 [2024-12-06 09:56:58.764195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.583 [2024-12-06 
09:56:58.764215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:33.583 [2024-12-06 09:56:58.768423] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.583 [2024-12-06 09:56:58.768512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.584 [2024-12-06 09:56:58.768531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:33.584 [2024-12-06 09:56:58.772702] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.584 [2024-12-06 09:56:58.772806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.584 [2024-12-06 09:56:58.772825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:33.584 [2024-12-06 09:56:58.776898] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.584 [2024-12-06 09:56:58.776975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.584 [2024-12-06 09:56:58.776996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:33.584 [2024-12-06 09:56:58.781152] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.584 [2024-12-06 09:56:58.781241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.584 [2024-12-06 09:56:58.781261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:33.584 [2024-12-06 09:56:58.785463] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.584 [2024-12-06 09:56:58.785543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.584 [2024-12-06 09:56:58.785563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:33.584 [2024-12-06 09:56:58.789804] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.584 [2024-12-06 09:56:58.789882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.584 [2024-12-06 09:56:58.789903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:33.584 [2024-12-06 09:56:58.794080] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.584 [2024-12-06 09:56:58.794317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:19:33.584 [2024-12-06 09:56:58.794339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:33.584 [2024-12-06 09:56:58.798727] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.584 [2024-12-06 09:56:58.798809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.584 [2024-12-06 09:56:58.798828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:33.584 [2024-12-06 09:56:58.802979] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.584 [2024-12-06 09:56:58.803059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.584 [2024-12-06 09:56:58.803080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:33.584 [2024-12-06 09:56:58.807281] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.584 [2024-12-06 09:56:58.807357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.584 [2024-12-06 09:56:58.807378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:33.584 [2024-12-06 09:56:58.811644] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.584 [2024-12-06 09:56:58.811723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.584 [2024-12-06 09:56:58.811743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:33.584 [2024-12-06 09:56:58.815846] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.584 [2024-12-06 09:56:58.815924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.584 [2024-12-06 09:56:58.815944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:33.584 [2024-12-06 09:56:58.820250] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.584 [2024-12-06 09:56:58.820329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.584 [2024-12-06 09:56:58.820349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:33.584 [2024-12-06 09:56:58.824593] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.584 [2024-12-06 09:56:58.824676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5376 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.584 [2024-12-06 09:56:58.824695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:33.584 [2024-12-06 09:56:58.828793] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.584 [2024-12-06 09:56:58.828876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.584 [2024-12-06 09:56:58.828896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:33.584 [2024-12-06 09:56:58.833111] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.584 [2024-12-06 09:56:58.833320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.584 [2024-12-06 09:56:58.833341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:33.584 [2024-12-06 09:56:58.837716] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.584 [2024-12-06 09:56:58.837814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.584 [2024-12-06 09:56:58.837834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:33.584 [2024-12-06 09:56:58.842014] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.584 [2024-12-06 09:56:58.842094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.584 [2024-12-06 09:56:58.842114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:33.584 [2024-12-06 09:56:58.846426] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.584 [2024-12-06 09:56:58.846530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.584 [2024-12-06 09:56:58.846550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:33.584 [2024-12-06 09:56:58.850934] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.584 [2024-12-06 09:56:58.851023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.584 [2024-12-06 09:56:58.851043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:33.844 [2024-12-06 09:56:58.855269] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.845 [2024-12-06 09:56:58.855351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:3 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.845 [2024-12-06 09:56:58.855371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:33.845 [2024-12-06 09:56:58.859693] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.845 [2024-12-06 09:56:58.859775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.845 [2024-12-06 09:56:58.859795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:33.845 [2024-12-06 09:56:58.863904] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.845 [2024-12-06 09:56:58.863993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.845 [2024-12-06 09:56:58.864013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:33.845 [2024-12-06 09:56:58.868173] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.845 [2024-12-06 09:56:58.868251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.845 [2024-12-06 09:56:58.868272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:33.845 [2024-12-06 09:56:58.872470] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.845 [2024-12-06 09:56:58.872549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.845 [2024-12-06 09:56:58.872580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:33.845 [2024-12-06 09:56:58.876836] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.845 [2024-12-06 09:56:58.876924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.845 [2024-12-06 09:56:58.876944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:33.845 [2024-12-06 09:56:58.881108] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.845 [2024-12-06 09:56:58.881320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.845 [2024-12-06 09:56:58.881341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:33.845 [2024-12-06 09:56:58.885816] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.845 [2024-12-06 09:56:58.885899] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.845 [2024-12-06 09:56:58.885919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:33.845 [2024-12-06 09:56:58.890091] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.845 [2024-12-06 09:56:58.890175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.845 [2024-12-06 09:56:58.890195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:33.845 [2024-12-06 09:56:58.894386] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.845 [2024-12-06 09:56:58.894465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.845 [2024-12-06 09:56:58.894485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:33.845 [2024-12-06 09:56:58.898767] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.845 [2024-12-06 09:56:58.898850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.845 [2024-12-06 09:56:58.898870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:33.845 [2024-12-06 09:56:58.902994] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.845 [2024-12-06 09:56:58.903072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.845 [2024-12-06 09:56:58.903092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:33.845 [2024-12-06 09:56:58.907254] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.845 [2024-12-06 09:56:58.907353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.845 [2024-12-06 09:56:58.907374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:33.845 [2024-12-06 09:56:58.911558] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.845 [2024-12-06 09:56:58.911652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.845 [2024-12-06 09:56:58.911672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:33.845 [2024-12-06 09:56:58.915763] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.845 [2024-12-06 
09:56:58.915840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.845 [2024-12-06 09:56:58.915860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:33.845 [2024-12-06 09:56:58.920089] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.845 [2024-12-06 09:56:58.920167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.845 [2024-12-06 09:56:58.920186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:33.845 [2024-12-06 09:56:58.924651] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.845 [2024-12-06 09:56:58.924729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.845 [2024-12-06 09:56:58.924749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:33.845 [2024-12-06 09:56:58.928930] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.845 [2024-12-06 09:56:58.928991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.845 [2024-12-06 09:56:58.929010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:33.845 [2024-12-06 09:56:58.933177] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.845 [2024-12-06 09:56:58.933274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.845 [2024-12-06 09:56:58.933294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:33.845 [2024-12-06 09:56:58.937541] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.845 [2024-12-06 09:56:58.937638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.845 [2024-12-06 09:56:58.937658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:33.845 [2024-12-06 09:56:58.941910] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.845 [2024-12-06 09:56:58.941963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.845 [2024-12-06 09:56:58.941983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:33.845 [2024-12-06 09:56:58.946131] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with 
pdu=0x200016eff3c8 00:19:33.845 [2024-12-06 09:56:58.946187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.845 [2024-12-06 09:56:58.946207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:33.845 [2024-12-06 09:56:58.950419] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.845 [2024-12-06 09:56:58.950504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.845 [2024-12-06 09:56:58.950523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:33.845 [2024-12-06 09:56:58.954739] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.845 [2024-12-06 09:56:58.954792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.845 [2024-12-06 09:56:58.954811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:33.845 [2024-12-06 09:56:58.959049] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.845 [2024-12-06 09:56:58.959112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.845 [2024-12-06 09:56:58.959133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:33.845 [2024-12-06 09:56:58.963551] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.845 [2024-12-06 09:56:58.963619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.845 [2024-12-06 09:56:58.963640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:33.845 [2024-12-06 09:56:58.967929] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.846 [2024-12-06 09:56:58.967982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.846 [2024-12-06 09:56:58.968001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:33.846 [2024-12-06 09:56:58.972157] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.846 [2024-12-06 09:56:58.972213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.846 [2024-12-06 09:56:58.972233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:33.846 [2024-12-06 09:56:58.976496] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.846 [2024-12-06 09:56:58.976551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.846 [2024-12-06 09:56:58.976582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:33.846 [2024-12-06 09:56:58.980765] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.846 [2024-12-06 09:56:58.980844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.846 [2024-12-06 09:56:58.980864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:33.846 [2024-12-06 09:56:58.984993] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.846 [2024-12-06 09:56:58.985071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.846 [2024-12-06 09:56:58.985090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:33.846 [2024-12-06 09:56:58.989327] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.846 [2024-12-06 09:56:58.989425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.846 [2024-12-06 09:56:58.989444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:33.846 [2024-12-06 09:56:58.993632] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.846 [2024-12-06 09:56:58.993732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.846 [2024-12-06 09:56:58.993752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:33.846 [2024-12-06 09:56:58.997865] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.846 [2024-12-06 09:56:58.997940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.846 [2024-12-06 09:56:58.997960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:33.846 [2024-12-06 09:56:59.002113] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.846 [2024-12-06 09:56:59.002192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.846 [2024-12-06 09:56:59.002212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:33.846 [2024-12-06 09:56:59.006373] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.846 [2024-12-06 09:56:59.006450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.846 [2024-12-06 09:56:59.006469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:33.846 [2024-12-06 09:56:59.010693] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.846 [2024-12-06 09:56:59.010768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.846 [2024-12-06 09:56:59.010788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:33.846 [2024-12-06 09:56:59.015032] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.846 [2024-12-06 09:56:59.015097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.846 [2024-12-06 09:56:59.015118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:33.846 [2024-12-06 09:56:59.019430] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.846 [2024-12-06 09:56:59.019502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.846 [2024-12-06 09:56:59.019522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:33.846 [2024-12-06 09:56:59.023878] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.846 [2024-12-06 09:56:59.023971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.846 [2024-12-06 09:56:59.023990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:33.846 [2024-12-06 09:56:59.028093] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.846 [2024-12-06 09:56:59.028171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.846 [2024-12-06 09:56:59.028191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:33.846 [2024-12-06 09:56:59.032373] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.846 [2024-12-06 09:56:59.032471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.846 [2024-12-06 09:56:59.032490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:33.846 [2024-12-06 09:56:59.036714] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.846 [2024-12-06 09:56:59.036791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.846 [2024-12-06 09:56:59.036810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:33.846 [2024-12-06 09:56:59.040941] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.846 [2024-12-06 09:56:59.041020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.846 [2024-12-06 09:56:59.041039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:33.846 [2024-12-06 09:56:59.045104] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.846 [2024-12-06 09:56:59.045160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.846 [2024-12-06 09:56:59.045179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:33.846 [2024-12-06 09:56:59.049445] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.846 [2024-12-06 09:56:59.049504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.846 [2024-12-06 09:56:59.049524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:33.846 [2024-12-06 09:56:59.053721] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.846 [2024-12-06 09:56:59.053799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.846 [2024-12-06 09:56:59.053819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:33.846 [2024-12-06 09:56:59.058071] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.846 [2024-12-06 09:56:59.058148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.846 [2024-12-06 09:56:59.058168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:33.846 [2024-12-06 09:56:59.062594] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.846 [2024-12-06 09:56:59.062690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.846 [2024-12-06 09:56:59.062710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:33.846 
[2024-12-06 09:56:59.067095] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.846 [2024-12-06 09:56:59.067171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.846 [2024-12-06 09:56:59.067190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:33.846 [2024-12-06 09:56:59.071555] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.846 [2024-12-06 09:56:59.071665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.846 [2024-12-06 09:56:59.071686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:33.846 [2024-12-06 09:56:59.075966] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.846 [2024-12-06 09:56:59.076050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.846 [2024-12-06 09:56:59.076069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:33.846 [2024-12-06 09:56:59.080398] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.846 [2024-12-06 09:56:59.080451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.846 [2024-12-06 09:56:59.080471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:33.846 [2024-12-06 09:56:59.084811] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.846 [2024-12-06 09:56:59.084867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.847 [2024-12-06 09:56:59.084887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:33.847 [2024-12-06 09:56:59.089231] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.847 [2024-12-06 09:56:59.089332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.847 [2024-12-06 09:56:59.089352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:33.847 [2024-12-06 09:56:59.093696] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.847 [2024-12-06 09:56:59.093805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.847 [2024-12-06 09:56:59.093826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 
sqhd:0002 p:0 m:0 dnr:0 00:19:33.847 [2024-12-06 09:56:59.098130] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.847 [2024-12-06 09:56:59.098213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.847 [2024-12-06 09:56:59.098232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:33.847 [2024-12-06 09:56:59.102511] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.847 [2024-12-06 09:56:59.102623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.847 [2024-12-06 09:56:59.102643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:33.847 [2024-12-06 09:56:59.106921] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.847 [2024-12-06 09:56:59.107028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.847 [2024-12-06 09:56:59.107048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:33.847 [2024-12-06 09:56:59.111108] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:33.847 [2024-12-06 09:56:59.111213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.847 [2024-12-06 09:56:59.111232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:34.108 [2024-12-06 09:56:59.115328] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:34.108 [2024-12-06 09:56:59.115412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.108 [2024-12-06 09:56:59.115432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:34.108 [2024-12-06 09:56:59.119764] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:34.108 [2024-12-06 09:56:59.119820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.108 [2024-12-06 09:56:59.119840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:34.108 [2024-12-06 09:56:59.123904] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:34.108 [2024-12-06 09:56:59.123988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.108 [2024-12-06 09:56:59.124007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:34.108 [2024-12-06 09:56:59.128029] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:34.108 [2024-12-06 09:56:59.128106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.108 [2024-12-06 09:56:59.128126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:34.108 [2024-12-06 09:56:59.132256] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:34.108 [2024-12-06 09:56:59.132335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.108 [2024-12-06 09:56:59.132355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:34.108 [2024-12-06 09:56:59.136532] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:34.108 [2024-12-06 09:56:59.136621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.108 [2024-12-06 09:56:59.136641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:34.108 [2024-12-06 09:56:59.140751] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:34.108 [2024-12-06 09:56:59.140836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.108 [2024-12-06 09:56:59.140855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:34.108 [2024-12-06 09:56:59.144879] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:34.108 [2024-12-06 09:56:59.144954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.108 [2024-12-06 09:56:59.144974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:34.108 [2024-12-06 09:56:59.149208] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:34.108 [2024-12-06 09:56:59.149290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.108 [2024-12-06 09:56:59.149309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:34.108 [2024-12-06 09:56:59.153475] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:34.108 [2024-12-06 09:56:59.153553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.108 [2024-12-06 09:56:59.153587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:34.108 [2024-12-06 09:56:59.157628] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:34.108 [2024-12-06 09:56:59.157707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.108 [2024-12-06 09:56:59.157726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:34.108 [2024-12-06 09:56:59.161795] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:34.108 [2024-12-06 09:56:59.161873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.108 [2024-12-06 09:56:59.161893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:34.108 [2024-12-06 09:56:59.166018] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:34.108 [2024-12-06 09:56:59.166072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.108 [2024-12-06 09:56:59.166092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:34.108 [2024-12-06 09:56:59.170202] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:34.108 [2024-12-06 09:56:59.170255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.108 [2024-12-06 09:56:59.170275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:34.108 [2024-12-06 09:56:59.174430] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:34.108 [2024-12-06 09:56:59.174507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.108 [2024-12-06 09:56:59.174527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:34.108 [2024-12-06 09:56:59.178675] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:34.109 [2024-12-06 09:56:59.178766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.109 [2024-12-06 09:56:59.178786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:34.109 [2024-12-06 09:56:59.182884] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:34.109 [2024-12-06 09:56:59.182937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.109 [2024-12-06 09:56:59.182957] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:34.109 [2024-12-06 09:56:59.187038] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:34.109 [2024-12-06 09:56:59.187115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.109 [2024-12-06 09:56:59.187135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:34.109 [2024-12-06 09:56:59.191270] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:34.109 [2024-12-06 09:56:59.191354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.109 [2024-12-06 09:56:59.191375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:34.109 [2024-12-06 09:56:59.195604] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:34.109 [2024-12-06 09:56:59.195681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.109 [2024-12-06 09:56:59.195701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:34.109 [2024-12-06 09:56:59.199887] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:34.109 [2024-12-06 09:56:59.199970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.109 [2024-12-06 09:56:59.199990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:34.109 [2024-12-06 09:56:59.204009] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:34.109 [2024-12-06 09:56:59.204062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.109 [2024-12-06 09:56:59.204082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:34.109 [2024-12-06 09:56:59.208150] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:34.109 [2024-12-06 09:56:59.208246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.109 [2024-12-06 09:56:59.208266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:34.109 [2024-12-06 09:56:59.212319] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:34.109 [2024-12-06 09:56:59.212396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.109 [2024-12-06 
09:56:59.212416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:34.109 [2024-12-06 09:56:59.216595] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:34.109 [2024-12-06 09:56:59.216675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.109 [2024-12-06 09:56:59.216695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:34.109 [2024-12-06 09:56:59.220860] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:34.109 [2024-12-06 09:56:59.220941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.109 [2024-12-06 09:56:59.220960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:34.109 [2024-12-06 09:56:59.224988] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:34.109 [2024-12-06 09:56:59.225041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.109 [2024-12-06 09:56:59.225061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:34.109 [2024-12-06 09:56:59.229219] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:34.109 [2024-12-06 09:56:59.229321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.109 [2024-12-06 09:56:59.229341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:34.109 [2024-12-06 09:56:59.233470] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:34.109 [2024-12-06 09:56:59.233546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.109 [2024-12-06 09:56:59.233580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:34.109 [2024-12-06 09:56:59.237776] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:34.109 [2024-12-06 09:56:59.237854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.109 [2024-12-06 09:56:59.237873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:34.109 [2024-12-06 09:56:59.242015] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:34.109 [2024-12-06 09:56:59.242116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:19:34.109 [2024-12-06 09:56:59.242136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:34.109 [2024-12-06 09:56:59.246359] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:34.109 [2024-12-06 09:56:59.246435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.109 [2024-12-06 09:56:59.246454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:34.109 [2024-12-06 09:56:59.250775] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:34.109 [2024-12-06 09:56:59.250852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.109 [2024-12-06 09:56:59.250873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:34.109 [2024-12-06 09:56:59.255059] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:34.109 [2024-12-06 09:56:59.255143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.109 [2024-12-06 09:56:59.255162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:34.109 [2024-12-06 09:56:59.259230] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:34.109 [2024-12-06 09:56:59.259332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.109 [2024-12-06 09:56:59.259351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:34.109 [2024-12-06 09:56:59.263477] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:34.109 [2024-12-06 09:56:59.263560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.109 [2024-12-06 09:56:59.263595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:34.109 [2024-12-06 09:56:59.267620] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:34.109 [2024-12-06 09:56:59.267672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.109 [2024-12-06 09:56:59.267692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:34.109 [2024-12-06 09:56:59.271789] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:34.109 [2024-12-06 09:56:59.271867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21408 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.109 [2024-12-06 09:56:59.271886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:34.109 [2024-12-06 09:56:59.275913] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:34.109 [2024-12-06 09:56:59.275996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.109 [2024-12-06 09:56:59.276015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:34.109 [2024-12-06 09:56:59.280047] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:34.109 [2024-12-06 09:56:59.280125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.109 [2024-12-06 09:56:59.280144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:34.109 [2024-12-06 09:56:59.284234] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:34.109 [2024-12-06 09:56:59.284317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.109 [2024-12-06 09:56:59.284337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:34.109 [2024-12-06 09:56:59.288445] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:34.109 [2024-12-06 09:56:59.288520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.109 [2024-12-06 09:56:59.288539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:34.109 [2024-12-06 09:56:59.292686] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:34.110 [2024-12-06 09:56:59.292770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.110 [2024-12-06 09:56:59.292789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:34.110 [2024-12-06 09:56:59.296866] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:34.110 [2024-12-06 09:56:59.296962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.110 [2024-12-06 09:56:59.296982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:34.110 [2024-12-06 09:56:59.301107] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:34.110 [2024-12-06 09:56:59.301160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 
nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.110 [2024-12-06 09:56:59.301180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:34.110 [2024-12-06 09:56:59.305276] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:34.110 [2024-12-06 09:56:59.305355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.110 [2024-12-06 09:56:59.305375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:34.110 [2024-12-06 09:56:59.309525] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:34.110 [2024-12-06 09:56:59.309590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.110 [2024-12-06 09:56:59.309610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:34.110 [2024-12-06 09:56:59.313669] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:34.110 [2024-12-06 09:56:59.313753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.110 [2024-12-06 09:56:59.313773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:34.110 [2024-12-06 09:56:59.317808] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:34.110 [2024-12-06 09:56:59.317908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.110 [2024-12-06 09:56:59.317928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:34.110 [2024-12-06 09:56:59.322030] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:34.110 [2024-12-06 09:56:59.322113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.110 [2024-12-06 09:56:59.322133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:34.110 [2024-12-06 09:56:59.326195] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:34.110 [2024-12-06 09:56:59.326272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.110 [2024-12-06 09:56:59.326291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:34.110 [2024-12-06 09:56:59.330395] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:34.110 [2024-12-06 09:56:59.330470] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.110 [2024-12-06 09:56:59.330489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:34.110 [2024-12-06 09:56:59.334824] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:34.110 [2024-12-06 09:56:59.334902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.110 [2024-12-06 09:56:59.334922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:34.110 [2024-12-06 09:56:59.339265] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:34.110 [2024-12-06 09:56:59.339347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.110 [2024-12-06 09:56:59.339367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:34.110 [2024-12-06 09:56:59.343573] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:34.110 [2024-12-06 09:56:59.343641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.110 [2024-12-06 09:56:59.343660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:34.110 [2024-12-06 09:56:59.347796] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:34.110 [2024-12-06 09:56:59.347850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.110 [2024-12-06 09:56:59.347870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:34.110 [2024-12-06 09:56:59.351989] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:34.110 [2024-12-06 09:56:59.352067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.110 [2024-12-06 09:56:59.352087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:34.110 [2024-12-06 09:56:59.356204] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:34.110 [2024-12-06 09:56:59.356303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.110 [2024-12-06 09:56:59.356323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:34.110 [2024-12-06 09:56:59.360431] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:34.110 [2024-12-06 09:56:59.360509] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.110 [2024-12-06 09:56:59.360528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:34.110 [2024-12-06 09:56:59.364734] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:34.110 [2024-12-06 09:56:59.364812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.110 [2024-12-06 09:56:59.364831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:34.110 [2024-12-06 09:56:59.368950] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:34.110 [2024-12-06 09:56:59.369028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.110 [2024-12-06 09:56:59.369048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:34.110 [2024-12-06 09:56:59.373145] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:34.110 [2024-12-06 09:56:59.373241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.110 [2024-12-06 09:56:59.373260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:34.370 [2024-12-06 09:56:59.377375] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:34.370 [2024-12-06 09:56:59.377427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.370 [2024-12-06 09:56:59.377447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:34.370 [2024-12-06 09:56:59.381665] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:34.370 [2024-12-06 09:56:59.381748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.370 [2024-12-06 09:56:59.381768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:34.370 [2024-12-06 09:56:59.385834] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:34.370 [2024-12-06 09:56:59.385887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.370 [2024-12-06 09:56:59.385907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:34.370 [2024-12-06 09:56:59.390024] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:34.370 [2024-12-06 
09:56:59.390094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.370 [2024-12-06 09:56:59.390113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:34.370 [2024-12-06 09:56:59.394327] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:34.370 [2024-12-06 09:56:59.394410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.370 [2024-12-06 09:56:59.394430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:34.370 [2024-12-06 09:56:59.398660] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:34.370 [2024-12-06 09:56:59.398744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.370 [2024-12-06 09:56:59.398763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:34.370 [2024-12-06 09:56:59.402833] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:34.370 [2024-12-06 09:56:59.402915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.370 [2024-12-06 09:56:59.402936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:34.370 [2024-12-06 09:56:59.406934] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:34.370 [2024-12-06 09:56:59.407045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.370 [2024-12-06 09:56:59.407065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:34.370 [2024-12-06 09:56:59.411117] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:34.370 [2024-12-06 09:56:59.411204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.371 [2024-12-06 09:56:59.411224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:34.371 [2024-12-06 09:56:59.415334] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:34.371 [2024-12-06 09:56:59.415412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.371 [2024-12-06 09:56:59.415431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:34.371 [2024-12-06 09:56:59.419643] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 
00:19:34.371 [2024-12-06 09:56:59.419698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.371 [2024-12-06 09:56:59.419717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:34.371 [2024-12-06 09:56:59.423735] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:34.371 [2024-12-06 09:56:59.423866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.371 [2024-12-06 09:56:59.423887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:34.371 [2024-12-06 09:56:59.427992] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:34.371 [2024-12-06 09:56:59.428076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.371 [2024-12-06 09:56:59.428096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:34.371 [2024-12-06 09:56:59.432193] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:34.371 [2024-12-06 09:56:59.432278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.371 [2024-12-06 09:56:59.432297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:34.371 [2024-12-06 09:56:59.436506] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:34.371 [2024-12-06 09:56:59.436586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.371 [2024-12-06 09:56:59.436617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:34.371 [2024-12-06 09:56:59.440646] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:34.371 [2024-12-06 09:56:59.440742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.371 [2024-12-06 09:56:59.440762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:34.371 [2024-12-06 09:56:59.444775] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:34.371 [2024-12-06 09:56:59.444852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.371 [2024-12-06 09:56:59.444871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:34.371 [2024-12-06 09:56:59.448922] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:34.371 [2024-12-06 09:56:59.449000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.371 [2024-12-06 09:56:59.449020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:34.371 7204.50 IOPS, 900.56 MiB/s [2024-12-06T09:56:59.643Z] [2024-12-06 09:56:59.453755] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x259ad10) with pdu=0x200016eff3c8 00:19:34.371 [2024-12-06 09:56:59.453831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.371 [2024-12-06 09:56:59.453851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:34.371 00:19:34.371 Latency(us) 00:19:34.371 [2024-12-06T09:56:59.643Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:34.371 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:19:34.371 nvme0n1 : 2.00 7202.14 900.27 0.00 0.00 2216.71 1467.11 5272.67 00:19:34.371 [2024-12-06T09:56:59.643Z] =================================================================================================================== 00:19:34.371 [2024-12-06T09:56:59.643Z] Total : 7202.14 900.27 0.00 0.00 2216.71 1467.11 5272.67 00:19:34.371 { 00:19:34.371 "results": [ 00:19:34.371 { 00:19:34.371 "job": "nvme0n1", 00:19:34.371 "core_mask": "0x2", 00:19:34.371 "workload": "randwrite", 00:19:34.371 "status": "finished", 00:19:34.371 "queue_depth": 16, 00:19:34.371 "io_size": 131072, 00:19:34.371 "runtime": 2.002876, 00:19:34.371 "iops": 7202.143317908847, 00:19:34.371 "mibps": 900.2679147386059, 00:19:34.371 "io_failed": 0, 00:19:34.371 "io_timeout": 0, 00:19:34.371 "avg_latency_us": 2216.7058956987553, 00:19:34.371 "min_latency_us": 1467.1127272727272, 00:19:34.371 "max_latency_us": 5272.669090909091 00:19:34.371 } 00:19:34.371 ], 00:19:34.371 "core_count": 1 00:19:34.371 } 00:19:34.371 09:56:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:19:34.371 09:56:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:19:34.371 09:56:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:19:34.371 | .driver_specific 00:19:34.371 | .nvme_error 00:19:34.371 | .status_code 00:19:34.371 | .command_transient_transport_error' 00:19:34.371 09:56:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:19:34.631 09:56:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 466 > 0 )) 00:19:34.631 09:56:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80345 00:19:34.631 09:56:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80345 ']' 00:19:34.631 09:56:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80345 00:19:34.631 09:56:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:19:34.631 09:56:59 
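The long run of tcp.c data_crc32_calc_done errors above is the expected output of this error-path test: each WRITE completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22) after the data digest check fails, which is exactly what the assertion further down counts on, and bperf still reports about 7202 IOPS at 900.27 MiB/s over the 2.00 s run. Two quick checks tie the trace and the summary numbers together; the snippet below is only a sketch, reusing the bperf socket path and jq filter exactly as they appear in the trace, with errcount as an illustrative variable name rather than anything taken from the test script:

    # Throughput sanity check: 131072-byte IOs, so MiB/s = IOPS * 131072 / 1048576
    awk 'BEGIN { printf "%.2f MiB/s\n", 7202.14 * 131072 / 1048576 }'   # ~900.27, matching the summary

    # Each failed WRITE above covers len:32 blocks, i.e. 131072 / 32 = 4096-byte blocks
    awk 'BEGIN { print 131072 / 32 }'

    # Read back the transient-transport-error count the same way host/digest.sh does
    errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    (( errcount > 0 )) && echo "digest errors were counted as expected: $errcount"

In this run the count comes back as 466, so the (( 466 > 0 )) assertion in the trace passes and the harness moves on to shutting down the bperf process.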
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:34.631 09:56:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80345 00:19:34.631 killing process with pid 80345 00:19:34.631 Received shutdown signal, test time was about 2.000000 seconds 00:19:34.631 00:19:34.631 Latency(us) 00:19:34.631 [2024-12-06T09:56:59.903Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:34.631 [2024-12-06T09:56:59.903Z] =================================================================================================================== 00:19:34.631 [2024-12-06T09:56:59.903Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:34.631 09:56:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:34.631 09:56:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:34.631 09:56:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80345' 00:19:34.631 09:56:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80345 00:19:34.631 09:56:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80345 00:19:34.890 09:57:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 80140 00:19:34.890 09:57:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80140 ']' 00:19:34.890 09:57:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80140 00:19:34.890 09:57:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:19:34.890 09:57:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:34.890 09:57:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80140 00:19:34.890 killing process with pid 80140 00:19:34.890 09:57:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:34.890 09:57:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:34.890 09:57:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80140' 00:19:34.890 09:57:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80140 00:19:34.890 09:57:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80140 00:19:35.149 ************************************ 00:19:35.149 END TEST nvmf_digest_error 00:19:35.149 ************************************ 00:19:35.149 00:19:35.149 real 0m18.071s 00:19:35.149 user 0m34.906s 00:19:35.149 sys 0m4.740s 00:19:35.149 09:57:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:35.149 09:57:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:35.149 09:57:00 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:19:35.149 09:57:00 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:19:35.149 09:57:00 
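With the error count verified, the trace walks through killprocess for the bperf instance (pid 80345) and then for the nvmf target (pid 80140): check that a pid was passed, probe it with kill -0, look up its command name with ps, refuse to signal a sudo wrapper, then kill it and wait for it to exit. A rough paraphrase of that flow, simplified from the traced steps rather than copied from test/common/autotest_common.sh:

    # Simplified sketch of the killprocess sequence seen in the trace above
    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1
        kill -0 "$pid" 2>/dev/null || return 1        # already gone? nothing to do
        # the trace only shows the Linux branch of the name lookup
        local name
        name=$(ps --no-headers -o comm= "$pid")       # e.g. reactor_1 for bperf, reactor_0 for the target
        [[ $name == sudo ]] && return 1               # never signal a sudo wrapper directly
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null                       # both processes are children of the test script here
    }

The "Received shutdown signal" message and the all-zero latency table in the middle of this block are bperf's shutdown printout after the kill, not a second measurement pass.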
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:35.149 09:57:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:19:35.149 09:57:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:35.408 09:57:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:19:35.408 09:57:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:35.408 09:57:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:35.409 rmmod nvme_tcp 00:19:35.409 rmmod nvme_fabrics 00:19:35.409 rmmod nvme_keyring 00:19:35.409 09:57:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:35.409 09:57:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:19:35.409 09:57:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:19:35.409 09:57:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 80140 ']' 00:19:35.409 09:57:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 80140 00:19:35.409 09:57:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 80140 ']' 00:19:35.409 09:57:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 80140 00:19:35.409 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (80140) - No such process 00:19:35.409 Process with pid 80140 is not found 00:19:35.409 09:57:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 80140 is not found' 00:19:35.409 09:57:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:35.409 09:57:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:35.409 09:57:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:35.409 09:57:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:19:35.409 09:57:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:19:35.409 09:57:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:35.409 09:57:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:19:35.409 09:57:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:35.409 09:57:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:35.409 09:57:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:35.409 09:57:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:35.409 09:57:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:35.409 09:57:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:35.409 09:57:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:35.409 09:57:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:35.409 09:57:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:35.409 09:57:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:35.409 09:57:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:35.409 
09:57:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:35.409 09:57:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:35.409 09:57:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:35.668 09:57:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:35.668 09:57:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:35.668 09:57:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:35.668 09:57:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:35.668 09:57:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:35.668 09:57:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@300 -- # return 0 00:19:35.668 00:19:35.668 real 0m37.024s 00:19:35.668 user 1m9.363s 00:19:35.668 sys 0m10.021s 00:19:35.668 ************************************ 00:19:35.668 END TEST nvmf_digest 00:19:35.668 ************************************ 00:19:35.668 09:57:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:35.668 09:57:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:19:35.668 09:57:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:19:35.668 09:57:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 1 -eq 1 ]] 00:19:35.668 09:57:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:19:35.668 09:57:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:35.668 09:57:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:35.668 09:57:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:35.668 ************************************ 00:19:35.668 START TEST nvmf_host_multipath 00:19:35.668 ************************************ 00:19:35.668 09:57:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:19:35.668 * Looking for test storage... 
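For reference, the nvmftestfini/killprocess teardown traced above boils down to roughly the following shell sketch (reconstructed from this run's trace; pid 80140, the nvmf_* interface names and the nvmf_tgt_ns_spdk namespace are the values used in this run, and the "No such process" output is expected when the target has already exited):
  # confirm the nvmf target process is gone, then unload the kernel initiator modules
  kill -0 80140 || echo 'Process with pid 80140 is not found'
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  # drop the SPDK_NVMF iptables rules and tear down the veth/bridge test topology
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  ip link set nvmf_init_br nomaster
  ip link set nvmf_tgt_br nomaster
  ip link delete nvmf_br type bridge
  ip link delete nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if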
00:19:35.668 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:35.668 09:57:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:35.668 09:57:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:35.669 09:57:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:19:35.929 09:57:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:35.929 09:57:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:35.929 09:57:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:35.929 09:57:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:35.929 09:57:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:19:35.929 09:57:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:19:35.929 09:57:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:19:35.929 09:57:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:19:35.929 09:57:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:19:35.929 09:57:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:19:35.929 09:57:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:19:35.929 09:57:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:35.929 09:57:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@344 -- # case "$op" in 00:19:35.929 09:57:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@345 -- # : 1 00:19:35.929 09:57:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:35.929 09:57:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:35.929 09:57:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # decimal 1 00:19:35.929 09:57:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=1 00:19:35.929 09:57:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:35.929 09:57:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 1 00:19:35.929 09:57:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:19:35.929 09:57:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # decimal 2 00:19:35.929 09:57:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=2 00:19:35.929 09:57:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:35.929 09:57:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 2 00:19:35.929 09:57:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:19:35.929 09:57:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:35.929 09:57:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:35.929 09:57:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # return 0 00:19:35.929 09:57:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:35.929 09:57:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:35.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:35.929 --rc genhtml_branch_coverage=1 00:19:35.929 --rc genhtml_function_coverage=1 00:19:35.929 --rc genhtml_legend=1 00:19:35.929 --rc geninfo_all_blocks=1 00:19:35.929 --rc geninfo_unexecuted_blocks=1 00:19:35.929 00:19:35.929 ' 00:19:35.929 09:57:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:35.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:35.929 --rc genhtml_branch_coverage=1 00:19:35.929 --rc genhtml_function_coverage=1 00:19:35.929 --rc genhtml_legend=1 00:19:35.929 --rc geninfo_all_blocks=1 00:19:35.929 --rc geninfo_unexecuted_blocks=1 00:19:35.929 00:19:35.929 ' 00:19:35.929 09:57:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:35.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:35.929 --rc genhtml_branch_coverage=1 00:19:35.929 --rc genhtml_function_coverage=1 00:19:35.929 --rc genhtml_legend=1 00:19:35.929 --rc geninfo_all_blocks=1 00:19:35.929 --rc geninfo_unexecuted_blocks=1 00:19:35.929 00:19:35.929 ' 00:19:35.929 09:57:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:35.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:35.929 --rc genhtml_branch_coverage=1 00:19:35.929 --rc genhtml_function_coverage=1 00:19:35.929 --rc genhtml_legend=1 00:19:35.929 --rc geninfo_all_blocks=1 00:19:35.929 --rc geninfo_unexecuted_blocks=1 00:19:35.929 00:19:35.929 ' 00:19:35.929 09:57:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:35.929 09:57:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:19:35.929 09:57:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:35.929 09:57:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:35.929 09:57:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:35.929 09:57:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:35.929 09:57:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:35.929 09:57:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:35.929 09:57:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:35.929 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:35.929 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:35.929 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:35.929 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:19:35.929 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:19:35.929 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:35.929 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:35.929 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:35.929 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:35.929 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:35.929 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:19:35.929 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:35.929 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:35.929 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:35.929 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.929 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.929 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.929 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:19:35.929 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.930 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # : 0 00:19:35.930 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:35.930 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:35.930 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:35.930 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:35.930 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:35.930 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:35.930 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:35.930 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:35.930 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:35.930 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:35.930 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:35.930 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:35.930 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14 
-- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:35.930 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:19:35.930 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:35.930 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:19:35.930 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:19:35.930 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:35.930 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:35.930 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:35.930 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:35.930 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:35.930 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:35.930 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:35.930 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:35.930 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:35.930 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:35.930 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:35.930 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:35.930 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:35.930 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:35.930 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:35.930 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:35.930 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:35.930 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:35.930 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:35.930 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:35.930 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:35.930 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:35.930 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:35.930 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:35.930 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:35.930 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:35.930 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:35.930 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:35.930 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:35.930 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:35.930 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:35.930 Cannot find device "nvmf_init_br" 00:19:35.930 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:19:35.930 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:35.930 Cannot find device "nvmf_init_br2" 00:19:35.930 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:19:35.930 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:35.930 Cannot find device "nvmf_tgt_br" 00:19:35.930 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # true 00:19:35.930 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:35.930 Cannot find device "nvmf_tgt_br2" 00:19:35.930 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # true 00:19:35.930 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:35.930 Cannot find device "nvmf_init_br" 00:19:35.930 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # true 00:19:35.930 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:35.930 Cannot find device "nvmf_init_br2" 00:19:35.930 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # true 00:19:35.930 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:35.930 Cannot find device "nvmf_tgt_br" 00:19:35.930 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # true 00:19:35.930 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:35.930 Cannot find device "nvmf_tgt_br2" 00:19:35.930 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # true 00:19:35.930 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:35.930 Cannot find device "nvmf_br" 00:19:35.930 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # true 00:19:35.930 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:35.930 Cannot find device "nvmf_init_if" 00:19:35.930 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # true 00:19:35.930 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:35.930 Cannot find device "nvmf_init_if2" 00:19:35.930 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # true 00:19:35.930 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:19:35.930 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:35.930 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # true 00:19:35.930 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:35.930 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:35.930 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # true 00:19:35.930 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:35.930 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:35.930 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:35.930 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:35.930 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:35.930 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:36.190 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:36.190 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:36.190 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:36.190 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:36.190 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:36.190 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:36.190 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:36.190 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:36.190 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:36.190 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:36.190 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:36.190 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:36.190 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:36.190 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:36.190 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:36.190 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:36.190 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 
00:19:36.190 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:36.190 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:36.190 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:36.190 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:36.190 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:36.190 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:36.190 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:36.190 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:36.190 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:36.190 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:36.190 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:36.190 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms 00:19:36.190 00:19:36.190 --- 10.0.0.3 ping statistics --- 00:19:36.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:36.190 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:19:36.190 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:36.190 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:36.190 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.048 ms 00:19:36.190 00:19:36.191 --- 10.0.0.4 ping statistics --- 00:19:36.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:36.191 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:19:36.191 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:36.191 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:36.191 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:19:36.191 00:19:36.191 --- 10.0.0.1 ping statistics --- 00:19:36.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:36.191 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:19:36.191 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:36.191 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:36.191 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:19:36.191 00:19:36.191 --- 10.0.0.2 ping statistics --- 00:19:36.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:36.191 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:19:36.191 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:36.191 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@461 -- # return 0 00:19:36.191 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:36.191 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:36.191 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:36.191 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:36.191 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:36.191 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:36.191 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:36.191 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:19:36.191 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:36.191 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:36.191 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:19:36.191 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@509 -- # nvmfpid=80660 00:19:36.191 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:19:36.191 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@510 -- # waitforlisten 80660 00:19:36.191 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 80660 ']' 00:19:36.191 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:36.191 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:36.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:36.191 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:36.191 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:36.191 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:19:36.191 [2024-12-06 09:57:01.456708] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
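For reference, the veth/namespace topology whose bring-up is traced above (the "Cannot find device" messages are expected on the first pass, before the interfaces exist) can be summarized by the following sketch, assembled from this run's trace; the interface names, the nvmf_tgt_ns_spdk namespace and the 10.0.0.x/24 addresses are the ones used here, and the second initiator/target pair (nvmf_init_if2/nvmf_tgt_if2 with 10.0.0.2 and 10.0.0.4) follows the same pattern:
  # create the target namespace and the two veth pairs
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  # address the initiator side on the host and the target side inside the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  # bridge the two ends together and accept NVMe/TCP traffic on port 4420
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  # reachability check from the host toward the in-namespace target address
  ping -c 1 10.0.0.3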
00:19:36.191 [2024-12-06 09:57:01.456790] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:36.451 [2024-12-06 09:57:01.610430] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:36.451 [2024-12-06 09:57:01.668496] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:36.451 [2024-12-06 09:57:01.668818] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:36.451 [2024-12-06 09:57:01.668921] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:36.451 [2024-12-06 09:57:01.669020] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:36.451 [2024-12-06 09:57:01.669133] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:36.451 [2024-12-06 09:57:01.670491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:36.451 [2024-12-06 09:57:01.670508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:36.710 [2024-12-06 09:57:01.728469] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:36.710 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:36.710 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:19:36.710 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:36.710 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:36.710 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:19:36.710 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:36.710 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=80660 00:19:36.710 09:57:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:36.968 [2024-12-06 09:57:02.139934] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:36.968 09:57:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:19:37.226 Malloc0 00:19:37.226 09:57:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:19:37.794 09:57:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:37.794 09:57:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:38.053 [2024-12-06 09:57:03.217395] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:38.053 09:57:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:19:38.312 [2024-12-06 09:57:03.453714] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:19:38.312 09:57:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=80708 00:19:38.312 09:57:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:19:38.312 09:57:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:38.312 09:57:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 80708 /var/tmp/bdevperf.sock 00:19:38.312 09:57:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 80708 ']' 00:19:38.312 09:57:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:38.312 09:57:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:38.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:38.313 09:57:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:38.313 09:57:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:38.313 09:57:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:19:39.249 09:57:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:39.249 09:57:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:19:39.249 09:57:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:19:39.509 09:57:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:19:40.082 Nvme0n1 00:19:40.082 09:57:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:19:40.341 Nvme0n1 00:19:40.341 09:57:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:19:40.341 09:57:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:19:41.277 09:57:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:19:41.277 09:57:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:19:41.564 09:57:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:19:41.825 09:57:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:19:41.825 09:57:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80660 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:41.825 09:57:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80753 00:19:41.825 09:57:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:48.389 09:57:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:48.389 09:57:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:19:48.389 09:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:19:48.389 09:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:48.389 Attaching 4 probes... 00:19:48.389 @path[10.0.0.3, 4421]: 16296 00:19:48.389 @path[10.0.0.3, 4421]: 21008 00:19:48.389 @path[10.0.0.3, 4421]: 21031 00:19:48.389 @path[10.0.0.3, 4421]: 21431 00:19:48.389 @path[10.0.0.3, 4421]: 21260 00:19:48.389 09:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:48.389 09:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:19:48.389 09:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:48.389 09:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:19:48.389 09:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:19:48.389 09:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:19:48.389 09:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80753 00:19:48.390 09:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:48.390 09:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:19:48.390 09:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:19:48.390 09:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:19:48.648 09:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:19:48.648 09:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80872 00:19:48.648 09:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:48.648 09:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80660 
/home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:55.214 09:57:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:55.214 09:57:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:19:55.214 09:57:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:19:55.214 09:57:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:55.214 Attaching 4 probes... 00:19:55.214 @path[10.0.0.3, 4420]: 19894 00:19:55.214 @path[10.0.0.3, 4420]: 20419 00:19:55.214 @path[10.0.0.3, 4420]: 20214 00:19:55.214 @path[10.0.0.3, 4420]: 20231 00:19:55.214 @path[10.0.0.3, 4420]: 20173 00:19:55.214 09:57:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:19:55.214 09:57:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:55.214 09:57:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:55.214 09:57:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:19:55.214 09:57:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:19:55.214 09:57:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:19:55.214 09:57:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80872 00:19:55.214 09:57:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:55.214 09:57:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:19:55.214 09:57:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:19:55.214 09:57:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:19:55.473 09:57:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:19:55.473 09:57:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80990 00:19:55.473 09:57:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80660 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:55.473 09:57:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:20:02.038 09:57:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:20:02.038 09:57:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:20:02.038 09:57:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:20:02.038 09:57:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:02.038 Attaching 4 probes... 00:20:02.038 @path[10.0.0.3, 4421]: 15954 00:20:02.038 @path[10.0.0.3, 4421]: 20806 00:20:02.038 @path[10.0.0.3, 4421]: 21759 00:20:02.038 @path[10.0.0.3, 4421]: 21746 00:20:02.038 @path[10.0.0.3, 4421]: 21551 00:20:02.038 09:57:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:20:02.038 09:57:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:20:02.038 09:57:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:20:02.038 09:57:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:20:02.038 09:57:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:20:02.038 09:57:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:20:02.038 09:57:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80990 00:20:02.038 09:57:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:02.038 09:57:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:20:02.038 09:57:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:20:02.038 09:57:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:20:02.296 09:57:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:20:02.296 09:57:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81097 00:20:02.296 09:57:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80660 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:20:02.296 09:57:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:20:08.859 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:20:08.859 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:20:08.859 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:20:08.859 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:08.859 Attaching 4 probes... 
00:20:08.859 00:20:08.859 00:20:08.859 00:20:08.859 00:20:08.859 00:20:08.859 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:20:08.859 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:20:08.859 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:20:08.859 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:20:08.859 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:20:08.859 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:20:08.859 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81097 00:20:08.859 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:08.859 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:20:08.859 09:57:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:20:08.859 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:20:09.425 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:20:09.425 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80660 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:20:09.425 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81215 00:20:09.425 09:57:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:20:16.050 09:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:20:16.050 09:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:20:16.050 09:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:20:16.050 09:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:16.050 Attaching 4 probes... 
00:20:16.050 @path[10.0.0.3, 4421]: 20843 00:20:16.050 @path[10.0.0.3, 4421]: 21462 00:20:16.050 @path[10.0.0.3, 4421]: 21450 00:20:16.050 @path[10.0.0.3, 4421]: 21577 00:20:16.050 @path[10.0.0.3, 4421]: 21538 00:20:16.050 09:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:20:16.050 09:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:20:16.050 09:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:20:16.050 09:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:20:16.050 09:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:20:16.050 09:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:20:16.050 09:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81215 00:20:16.050 09:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:16.050 09:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:20:16.050 09:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:20:16.988 09:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:20:16.988 09:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81333 00:20:16.988 09:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80660 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:20:16.988 09:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:20:23.557 09:57:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:20:23.557 09:57:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:20:23.557 09:57:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:20:23.557 09:57:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:23.557 Attaching 4 probes... 
00:20:23.557 @path[10.0.0.3, 4420]: 19765 00:20:23.557 @path[10.0.0.3, 4420]: 20047 00:20:23.557 @path[10.0.0.3, 4420]: 20009 00:20:23.557 @path[10.0.0.3, 4420]: 16385 00:20:23.557 @path[10.0.0.3, 4420]: 15253 00:20:23.557 09:57:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:20:23.557 09:57:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:20:23.557 09:57:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:20:23.557 09:57:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:20:23.557 09:57:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:20:23.557 09:57:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:20:23.557 09:57:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81333 00:20:23.557 09:57:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:23.557 09:57:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:20:23.557 [2024-12-06 09:57:48.517726] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:20:23.557 09:57:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:20:23.815 09:57:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:20:30.379 09:57:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:20:30.379 09:57:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81513 00:20:30.379 09:57:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80660 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:20:30.379 09:57:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:20:35.642 09:58:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:20:35.642 09:58:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:20:35.901 09:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:20:35.901 09:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:35.901 Attaching 4 probes... 
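
The surrounding trace (host/multipath.sh@96-@112) exercises a full failover and failback: flip the ANA states so 4421 is optimized, remove the 4421 listener and confirm I/O retries on the non_optimized 4420 path, then re-add 4421 as optimized and confirm I/O moves back. A minimal sketch of that sequence, using only the rpc.py sub-commands and arguments visible in the log (the $rpc/$nqn variables and the set_ANA_state helper name are assumptions):

# Sketch only, reconstructed from the traced commands (multipath.sh@58-@59, @96-@112).
rpc="$rootdir/scripts/rpc.py"
nqn=nqn.2016-06.io.spdk:cnode1

set_ANA_state() {   # e.g. set_ANA_state non_optimized optimized
    "$rpc" nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.3 -s 4420 -n "$1"
    "$rpc" nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.3 -s 4421 -n "$2"
}

set_ANA_state non_optimized optimized
confirm_io_on_port optimized 4421

# Drop the optimized path; I/O should fail over to the non_optimized 4420 listener.
"$rpc" nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.3 -s 4421
sleep 1
confirm_io_on_port non_optimized 4420

# Bring the optimized path back and verify I/O returns to it.
"$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.3 -s 4421
"$rpc" nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.3 -s 4421 -n optimized
sleep 6
confirm_io_on_port optimized 4421
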
00:20:35.901 @path[10.0.0.3, 4421]: 18884 00:20:35.901 @path[10.0.0.3, 4421]: 19177 00:20:35.901 @path[10.0.0.3, 4421]: 19088 00:20:35.901 @path[10.0.0.3, 4421]: 19175 00:20:35.901 @path[10.0.0.3, 4421]: 19267 00:20:35.901 09:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:20:35.901 09:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:20:35.901 09:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:20:35.901 09:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:20:35.901 09:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:20:35.901 09:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:20:35.901 09:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81513 00:20:35.901 09:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:35.901 09:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 80708 00:20:35.901 09:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 80708 ']' 00:20:35.901 09:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 80708 00:20:35.901 09:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname 00:20:35.901 09:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:35.901 09:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80708 00:20:36.160 killing process with pid 80708 00:20:36.160 09:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:36.160 09:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:36.160 09:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80708' 00:20:36.160 09:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 80708 00:20:36.160 09:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 80708 00:20:36.160 { 00:20:36.160 "results": [ 00:20:36.160 { 00:20:36.160 "job": "Nvme0n1", 00:20:36.160 "core_mask": "0x4", 00:20:36.160 "workload": "verify", 00:20:36.160 "status": "terminated", 00:20:36.160 "verify_range": { 00:20:36.160 "start": 0, 00:20:36.160 "length": 16384 00:20:36.160 }, 00:20:36.160 "queue_depth": 128, 00:20:36.160 "io_size": 4096, 00:20:36.160 "runtime": 55.677606, 00:20:36.160 "iops": 8417.459615630743, 00:20:36.160 "mibps": 32.88070162355759, 00:20:36.160 "io_failed": 0, 00:20:36.160 "io_timeout": 0, 00:20:36.160 "avg_latency_us": 15177.233015472995, 00:20:36.160 "min_latency_us": 867.6072727272727, 00:20:36.160 "max_latency_us": 7015926.69090909 00:20:36.160 } 00:20:36.160 ], 00:20:36.160 "core_count": 1 00:20:36.160 } 00:20:36.434 09:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 80708 00:20:36.434 09:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:20:36.434 [2024-12-06 09:57:03.519811] Starting SPDK v25.01-pre git sha1 eec618948 / 
DPDK 24.03.0 initialization... 00:20:36.434 [2024-12-06 09:57:03.519910] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80708 ] 00:20:36.434 [2024-12-06 09:57:03.654477] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:36.434 [2024-12-06 09:57:03.713783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:36.434 [2024-12-06 09:57:03.770242] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:36.434 Running I/O for 90 seconds... 00:20:36.434 7318.00 IOPS, 28.59 MiB/s [2024-12-06T09:58:01.706Z] 7395.50 IOPS, 28.89 MiB/s [2024-12-06T09:58:01.706Z] 7643.67 IOPS, 29.86 MiB/s [2024-12-06T09:58:01.706Z] 8348.75 IOPS, 32.61 MiB/s [2024-12-06T09:58:01.706Z] 8781.40 IOPS, 34.30 MiB/s [2024-12-06T09:58:01.706Z] 9102.50 IOPS, 35.56 MiB/s [2024-12-06T09:58:01.706Z] 9321.86 IOPS, 36.41 MiB/s [2024-12-06T09:58:01.706Z] 9490.38 IOPS, 37.07 MiB/s [2024-12-06T09:58:01.706Z] [2024-12-06 09:57:13.798093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:108568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.434 [2024-12-06 09:57:13.798146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:20:36.434 [2024-12-06 09:57:13.798211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:108576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.434 [2024-12-06 09:57:13.798231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:20:36.434 [2024-12-06 09:57:13.798253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:108584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.434 [2024-12-06 09:57:13.798267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.435 [2024-12-06 09:57:13.798286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:108592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.435 [2024-12-06 09:57:13.798300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.435 [2024-12-06 09:57:13.798319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:108600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.435 [2024-12-06 09:57:13.798333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:36.435 [2024-12-06 09:57:13.798352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:108608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.435 [2024-12-06 09:57:13.798366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:20:36.435 [2024-12-06 09:57:13.798385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:108616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.435 [2024-12-06 09:57:13.798399] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:20:36.435 [2024-12-06 09:57:13.798418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:108624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.435 [2024-12-06 09:57:13.798431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:20:36.435 [2024-12-06 09:57:13.798450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:108632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.435 [2024-12-06 09:57:13.798464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:36.435 [2024-12-06 09:57:13.798483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:108640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.435 [2024-12-06 09:57:13.798520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:20:36.435 [2024-12-06 09:57:13.798542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:108648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.435 [2024-12-06 09:57:13.798556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:20:36.435 [2024-12-06 09:57:13.798575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:108656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.435 [2024-12-06 09:57:13.798603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:20:36.435 [2024-12-06 09:57:13.798625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:108664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.435 [2024-12-06 09:57:13.798639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:20:36.435 [2024-12-06 09:57:13.798658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:108672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.435 [2024-12-06 09:57:13.798671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:20:36.435 [2024-12-06 09:57:13.798690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:108680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.435 [2024-12-06 09:57:13.798704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:20:36.435 [2024-12-06 09:57:13.798723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:108688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.435 [2024-12-06 09:57:13.798736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:20:36.435 [2024-12-06 09:57:13.798755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:108248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:20:36.435 [2024-12-06 09:57:13.798768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:20:36.435 [2024-12-06 09:57:13.798789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:108256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.435 [2024-12-06 09:57:13.798806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:20:36.435 [2024-12-06 09:57:13.798825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:108264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.435 [2024-12-06 09:57:13.798839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:20:36.435 [2024-12-06 09:57:13.798858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:108272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.435 [2024-12-06 09:57:13.798872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:20:36.435 [2024-12-06 09:57:13.798891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:108280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.435 [2024-12-06 09:57:13.798904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:20:36.435 [2024-12-06 09:57:13.798923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:108288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.435 [2024-12-06 09:57:13.798945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:20:36.435 [2024-12-06 09:57:13.798966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:108296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.435 [2024-12-06 09:57:13.798979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:20:36.435 [2024-12-06 09:57:13.798998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:108304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.435 [2024-12-06 09:57:13.799012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:36.435 [2024-12-06 09:57:13.799031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:108312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.435 [2024-12-06 09:57:13.799044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:20:36.435 [2024-12-06 09:57:13.799063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:108320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.435 [2024-12-06 09:57:13.799077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:20:36.435 [2024-12-06 09:57:13.799096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:29 nsid:1 lba:108328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.435 [2024-12-06 09:57:13.799109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:20:36.435 [2024-12-06 09:57:13.799128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:108336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.435 [2024-12-06 09:57:13.799142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:20:36.435 [2024-12-06 09:57:13.799161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:108344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.435 [2024-12-06 09:57:13.799175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:20:36.435 [2024-12-06 09:57:13.799194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:108352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.435 [2024-12-06 09:57:13.799233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:20:36.435 [2024-12-06 09:57:13.799254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:108360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.435 [2024-12-06 09:57:13.799269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:20:36.435 [2024-12-06 09:57:13.799288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:108368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.435 [2024-12-06 09:57:13.799302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:20:36.435 [2024-12-06 09:57:13.799506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:108696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.436 [2024-12-06 09:57:13.799531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:20:36.436 [2024-12-06 09:57:13.799556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:108704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.436 [2024-12-06 09:57:13.799571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:20:36.436 [2024-12-06 09:57:13.799646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:108712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.436 [2024-12-06 09:57:13.799663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:20:36.436 [2024-12-06 09:57:13.799683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:108720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.436 [2024-12-06 09:57:13.799697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:36.436 [2024-12-06 09:57:13.799717] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:108728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.436 [2024-12-06 09:57:13.799731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:36.436 [2024-12-06 09:57:13.799750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:108736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.436 [2024-12-06 09:57:13.799764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:36.436 [2024-12-06 09:57:13.799784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:108744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.436 [2024-12-06 09:57:13.799798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:36.436 [2024-12-06 09:57:13.799817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:108752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.436 [2024-12-06 09:57:13.799831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:36.436 [2024-12-06 09:57:13.799851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:108376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.436 [2024-12-06 09:57:13.799865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:36.436 [2024-12-06 09:57:13.799884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:108384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.436 [2024-12-06 09:57:13.799898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:36.436 [2024-12-06 09:57:13.799918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:108392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.436 [2024-12-06 09:57:13.799932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:36.436 [2024-12-06 09:57:13.799966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:108400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.436 [2024-12-06 09:57:13.799980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:20:36.436 [2024-12-06 09:57:13.799999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:108408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.436 [2024-12-06 09:57:13.800012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:20:36.436 [2024-12-06 09:57:13.800031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:108416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.436 [2024-12-06 09:57:13.800045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 
sqhd:002b p:0 m:0 dnr:0 00:20:36.436 [2024-12-06 09:57:13.800071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:108424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.436 [2024-12-06 09:57:13.800085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:36.436 [2024-12-06 09:57:13.800104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:108432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.436 [2024-12-06 09:57:13.800118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:36.436 [2024-12-06 09:57:13.800137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:108760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.436 [2024-12-06 09:57:13.800150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:36.436 [2024-12-06 09:57:13.800180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:108768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.436 [2024-12-06 09:57:13.800196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:36.436 [2024-12-06 09:57:13.800220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:108776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.436 [2024-12-06 09:57:13.800238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:20:36.436 [2024-12-06 09:57:13.800260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:108784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.436 [2024-12-06 09:57:13.800273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:20:36.436 [2024-12-06 09:57:13.800292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:108792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.436 [2024-12-06 09:57:13.800306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:20:36.436 [2024-12-06 09:57:13.800325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:108800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.436 [2024-12-06 09:57:13.800338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:20:36.436 [2024-12-06 09:57:13.800358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:108808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.436 [2024-12-06 09:57:13.800371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:20:36.436 [2024-12-06 09:57:13.800390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:108816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.436 [2024-12-06 09:57:13.800403] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:36.436 [2024-12-06 09:57:13.800422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:108824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.436 [2024-12-06 09:57:13.800436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:20:36.436 [2024-12-06 09:57:13.800455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:108832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.436 [2024-12-06 09:57:13.800468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:20:36.436 [2024-12-06 09:57:13.800494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:108840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.436 [2024-12-06 09:57:13.800509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:20:36.436 [2024-12-06 09:57:13.800528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:108848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.436 [2024-12-06 09:57:13.800542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:20:36.436 [2024-12-06 09:57:13.800561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:108856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.436 [2024-12-06 09:57:13.800586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:20:36.436 [2024-12-06 09:57:13.800607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:108864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.436 [2024-12-06 09:57:13.800621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:20:36.436 [2024-12-06 09:57:13.800640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:108872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.436 [2024-12-06 09:57:13.800655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:20:36.436 [2024-12-06 09:57:13.800674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:108880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.436 [2024-12-06 09:57:13.800687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:20:36.436 [2024-12-06 09:57:13.800706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:108888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.436 [2024-12-06 09:57:13.800720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:20:36.436 [2024-12-06 09:57:13.800740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:108896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.436 [2024-12-06 
09:57:13.800753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:20:36.436 [2024-12-06 09:57:13.800772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:108904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.436 [2024-12-06 09:57:13.800786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:20:36.436 [2024-12-06 09:57:13.800805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:108912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.436 [2024-12-06 09:57:13.800819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:36.436 [2024-12-06 09:57:13.800838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:108920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.436 [2024-12-06 09:57:13.800851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:36.436 [2024-12-06 09:57:13.800871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:108928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.436 [2024-12-06 09:57:13.800885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:20:36.437 [2024-12-06 09:57:13.800904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:108936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.437 [2024-12-06 09:57:13.800925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:20:36.437 [2024-12-06 09:57:13.800945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:108944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.437 [2024-12-06 09:57:13.800959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:20:36.437 [2024-12-06 09:57:13.800978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:108440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.437 [2024-12-06 09:57:13.800992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:36.437 [2024-12-06 09:57:13.801011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:108448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.437 [2024-12-06 09:57:13.801024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:20:36.437 [2024-12-06 09:57:13.801044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:108456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.437 [2024-12-06 09:57:13.801057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:20:36.437 [2024-12-06 09:57:13.801076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:108464 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.437 [2024-12-06 09:57:13.801090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:20:36.437 [2024-12-06 09:57:13.801109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:108472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.437 [2024-12-06 09:57:13.801123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:20:36.437 [2024-12-06 09:57:13.801142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:108480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.437 [2024-12-06 09:57:13.801156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:20:36.437 [2024-12-06 09:57:13.801175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:108488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.437 [2024-12-06 09:57:13.801189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:20:36.437 [2024-12-06 09:57:13.801209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:108496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.437 [2024-12-06 09:57:13.801223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:20:36.437 [2024-12-06 09:57:13.801246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:108952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.437 [2024-12-06 09:57:13.801261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:20:36.437 [2024-12-06 09:57:13.801281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.437 [2024-12-06 09:57:13.801295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:20:36.437 [2024-12-06 09:57:13.801314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:108968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.437 [2024-12-06 09:57:13.801334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:20:36.437 [2024-12-06 09:57:13.801355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:108976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.437 [2024-12-06 09:57:13.801369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:20:36.437 [2024-12-06 09:57:13.801388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:108984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.437 [2024-12-06 09:57:13.801402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:20:36.437 [2024-12-06 09:57:13.801421] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:108992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.437 [2024-12-06 09:57:13.801434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:20:36.437 [2024-12-06 09:57:13.801453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:109000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.437 [2024-12-06 09:57:13.801476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:20:36.437 [2024-12-06 09:57:13.801496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:109008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.437 [2024-12-06 09:57:13.801510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:36.437 [2024-12-06 09:57:13.801529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:109016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.437 [2024-12-06 09:57:13.801542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:20:36.437 [2024-12-06 09:57:13.801561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:109024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.437 [2024-12-06 09:57:13.801587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:20:36.437 [2024-12-06 09:57:13.801608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:109032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.437 [2024-12-06 09:57:13.801622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:20:36.437 [2024-12-06 09:57:13.801641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:109040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.437 [2024-12-06 09:57:13.801654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:20:36.437 [2024-12-06 09:57:13.801673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:109048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.437 [2024-12-06 09:57:13.801686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:20:36.437 [2024-12-06 09:57:13.801705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:109056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.437 [2024-12-06 09:57:13.801718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:20:36.437 [2024-12-06 09:57:13.801737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:109064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.437 [2024-12-06 09:57:13.801750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005c p:0 m:0 
dnr:0 00:20:36.437 [2024-12-06 09:57:13.801776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:109072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.437 [2024-12-06 09:57:13.801790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:20:36.437 [2024-12-06 09:57:13.801809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:108504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.437 [2024-12-06 09:57:13.801829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:20:36.437 [2024-12-06 09:57:13.801849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:108512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.437 [2024-12-06 09:57:13.801862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:20:36.437 [2024-12-06 09:57:13.801881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:108520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.437 [2024-12-06 09:57:13.801895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:20:36.437 [2024-12-06 09:57:13.801913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:108528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.437 [2024-12-06 09:57:13.801927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:36.437 [2024-12-06 09:57:13.801946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:108536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.437 [2024-12-06 09:57:13.801959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:36.437 [2024-12-06 09:57:13.801978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:108544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.437 [2024-12-06 09:57:13.801991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:20:36.437 [2024-12-06 09:57:13.802010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:108552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.437 [2024-12-06 09:57:13.802029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:20:36.437 [2024-12-06 09:57:13.803284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:108560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.437 [2024-12-06 09:57:13.803315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:20:36.437 [2024-12-06 09:57:13.803343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:109080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.437 [2024-12-06 09:57:13.803360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:36.437 [2024-12-06 09:57:13.803380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:109088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.437 [2024-12-06 09:57:13.803396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:20:36.437 [2024-12-06 09:57:13.803416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:109096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.437 [2024-12-06 09:57:13.803431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:20:36.437 [2024-12-06 09:57:13.803463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:109104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.437 [2024-12-06 09:57:13.803478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:20:36.438 [2024-12-06 09:57:13.803498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:109112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.438 [2024-12-06 09:57:13.803513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:20:36.438 [2024-12-06 09:57:13.803533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:109120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.438 [2024-12-06 09:57:13.803562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:20:36.438 [2024-12-06 09:57:13.803582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:109128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.438 [2024-12-06 09:57:13.803596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:20:36.438 [2024-12-06 09:57:13.803848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:109136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.438 [2024-12-06 09:57:13.803873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:20:36.438 [2024-12-06 09:57:13.803897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:109144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.438 [2024-12-06 09:57:13.803917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:20:36.438 [2024-12-06 09:57:13.803938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:109152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.438 [2024-12-06 09:57:13.803952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:20:36.438 [2024-12-06 09:57:13.803971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:109160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.438 [2024-12-06 09:57:13.803985] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:20:36.438 [2024-12-06 09:57:13.804004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:109168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.438 [2024-12-06 09:57:13.804017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:20:36.438 [2024-12-06 09:57:13.804037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:109176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.438 [2024-12-06 09:57:13.804050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:20:36.438 [2024-12-06 09:57:13.804069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:109184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.438 [2024-12-06 09:57:13.804083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:20:36.438 [2024-12-06 09:57:13.804102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:109192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.438 [2024-12-06 09:57:13.804122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:20:36.438 [2024-12-06 09:57:13.804146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:109200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.438 [2024-12-06 09:57:13.804172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:36.438 [2024-12-06 09:57:13.804193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:109208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.438 [2024-12-06 09:57:13.804207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:20:36.438 [2024-12-06 09:57:13.804226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:109216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.438 [2024-12-06 09:57:13.804240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:20:36.438 [2024-12-06 09:57:13.804259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:109224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.438 [2024-12-06 09:57:13.804272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:20:36.438 [2024-12-06 09:57:13.804291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:109232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.438 [2024-12-06 09:57:13.804305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:20:36.438 9557.78 IOPS, 37.34 MiB/s [2024-12-06T09:58:01.710Z] 9611.60 IOPS, 37.55 MiB/s [2024-12-06T09:58:01.710Z] 9658.55 IOPS, 37.73 MiB/s 
[2024-12-06T09:58:01.710Z] 9703.67 IOPS, 37.90 MiB/s [2024-12-06T09:58:01.710Z] 9733.85 IOPS, 38.02 MiB/s [2024-12-06T09:58:01.710Z] 9758.00 IOPS, 38.12 MiB/s [2024-12-06T09:58:01.710Z] [2024-12-06 09:57:20.380824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:115104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.438 [2024-12-06 09:57:20.380878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:36.438 [2024-12-06 09:57:20.381048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:115112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.438 [2024-12-06 09:57:20.381073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:36.438 [2024-12-06 09:57:20.381096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:115120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.438 [2024-12-06 09:57:20.381110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:20:36.438 [2024-12-06 09:57:20.381138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:115128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.438 [2024-12-06 09:57:20.381157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:20:36.438 [2024-12-06 09:57:20.381175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:115136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.438 [2024-12-06 09:57:20.381189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:20:36.438 [2024-12-06 09:57:20.381207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:115144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.438 [2024-12-06 09:57:20.381221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:20:36.438 [2024-12-06 09:57:20.381239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:115152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.438 [2024-12-06 09:57:20.381253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:20:36.438 [2024-12-06 09:57:20.381293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:115160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.438 [2024-12-06 09:57:20.381307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:36.438 [2024-12-06 09:57:20.381326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:114592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.438 [2024-12-06 09:57:20.381338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:20:36.438 [2024-12-06 09:57:20.381357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 
lba:114600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.438 [2024-12-06 09:57:20.381369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:20:36.438 [2024-12-06 09:57:20.381387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:114608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.438 [2024-12-06 09:57:20.381400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:20:36.438 [2024-12-06 09:57:20.381419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:114616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.438 [2024-12-06 09:57:20.381432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:20:36.438 [2024-12-06 09:57:20.381466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:114624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.438 [2024-12-06 09:57:20.381479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:20:36.438 [2024-12-06 09:57:20.381497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:114632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.438 [2024-12-06 09:57:20.381510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:20:36.438 [2024-12-06 09:57:20.381529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:114640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.438 [2024-12-06 09:57:20.381542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:20:36.438 [2024-12-06 09:57:20.381561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:114648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.438 [2024-12-06 09:57:20.381574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:20:36.438 [2024-12-06 09:57:20.381611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:115168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.438 [2024-12-06 09:57:20.381627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:20:36.438 [2024-12-06 09:57:20.381649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:115176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.438 [2024-12-06 09:57:20.381663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:20:36.438 [2024-12-06 09:57:20.381682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:115184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.438 [2024-12-06 09:57:20.381695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:20:36.438 [2024-12-06 09:57:20.381723] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:115192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.438 [2024-12-06 09:57:20.381738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:36.438 [2024-12-06 09:57:20.381756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:115200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.438 [2024-12-06 09:57:20.381770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:36.438 [2024-12-06 09:57:20.381789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:115208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.439 [2024-12-06 09:57:20.381802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:20:36.439 [2024-12-06 09:57:20.381821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:115216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.439 [2024-12-06 09:57:20.381834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:20:36.439 [2024-12-06 09:57:20.381868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:115224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.439 [2024-12-06 09:57:20.381881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:20:36.439 [2024-12-06 09:57:20.381899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:115232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.439 [2024-12-06 09:57:20.381912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:36.439 [2024-12-06 09:57:20.381930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:115240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.439 [2024-12-06 09:57:20.381943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:20:36.439 [2024-12-06 09:57:20.381962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:115248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.439 [2024-12-06 09:57:20.381974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:20:36.439 [2024-12-06 09:57:20.381993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:115256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.439 [2024-12-06 09:57:20.382006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:20:36.439 [2024-12-06 09:57:20.382024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:115264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.439 [2024-12-06 09:57:20.382037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004a p:0 m:0 dnr:0 
00:20:36.439 [2024-12-06 09:57:20.382056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:115272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.439 [2024-12-06 09:57:20.382069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:20:36.439 [2024-12-06 09:57:20.382087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:115280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.439 [2024-12-06 09:57:20.382100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:20:36.439 [2024-12-06 09:57:20.382119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.439 [2024-12-06 09:57:20.382138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:20:36.439 [2024-12-06 09:57:20.382157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:114656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.439 [2024-12-06 09:57:20.382170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:20:36.439 [2024-12-06 09:57:20.382190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:114664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.439 [2024-12-06 09:57:20.382203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:20:36.439 [2024-12-06 09:57:20.382222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:114672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.439 [2024-12-06 09:57:20.382235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:20:36.439 [2024-12-06 09:57:20.382253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:114680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.439 [2024-12-06 09:57:20.382266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:20:36.439 [2024-12-06 09:57:20.382284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:114688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.439 [2024-12-06 09:57:20.382297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:20:36.439 [2024-12-06 09:57:20.382316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:114696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.439 [2024-12-06 09:57:20.382329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:20:36.439 [2024-12-06 09:57:20.382347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:114704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.439 [2024-12-06 09:57:20.382359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:20:36.439 [2024-12-06 09:57:20.382378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:114712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.439 [2024-12-06 09:57:20.382391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:36.439 [2024-12-06 09:57:20.382431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:115296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.439 [2024-12-06 09:57:20.382449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:20:36.439 [2024-12-06 09:57:20.382469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:115304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.439 [2024-12-06 09:57:20.382483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:20:36.439 [2024-12-06 09:57:20.382501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:115312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.439 [2024-12-06 09:57:20.382514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:20:36.439 [2024-12-06 09:57:20.382532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:115320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.439 [2024-12-06 09:57:20.382553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:20:36.439 [2024-12-06 09:57:20.382572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:115328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.439 [2024-12-06 09:57:20.382597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:20:36.439 [2024-12-06 09:57:20.382633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:115336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.439 [2024-12-06 09:57:20.382647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:20:36.439 [2024-12-06 09:57:20.382665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:115344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.439 [2024-12-06 09:57:20.382678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:20:36.439 [2024-12-06 09:57:20.382697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:115352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.439 [2024-12-06 09:57:20.382710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:20:36.439 [2024-12-06 09:57:20.382733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:115360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.439 [2024-12-06 09:57:20.382747] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:20:36.439 [2024-12-06 09:57:20.382767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:115368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.439 [2024-12-06 09:57:20.382781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:20:36.439 [2024-12-06 09:57:20.382800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:115376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.439 [2024-12-06 09:57:20.382814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:20:36.439 [2024-12-06 09:57:20.382832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:115384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.439 [2024-12-06 09:57:20.382846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:36.439 [2024-12-06 09:57:20.382864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:115392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.439 [2024-12-06 09:57:20.382878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:36.439 [2024-12-06 09:57:20.382897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:115400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.439 [2024-12-06 09:57:20.382910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:20:36.440 [2024-12-06 09:57:20.382928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:115408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.440 [2024-12-06 09:57:20.382942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:20:36.440 [2024-12-06 09:57:20.382961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:115416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.440 [2024-12-06 09:57:20.382990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:20:36.440 [2024-12-06 09:57:20.383016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:114720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.440 [2024-12-06 09:57:20.383030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:36.440 [2024-12-06 09:57:20.383048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:114728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.440 [2024-12-06 09:57:20.383061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:20:36.440 [2024-12-06 09:57:20.383079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:114736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:20:36.440 [2024-12-06 09:57:20.383092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:20:36.440 [2024-12-06 09:57:20.383110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:114744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.440 [2024-12-06 09:57:20.383123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:20:36.440 [2024-12-06 09:57:20.383141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:114752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.440 [2024-12-06 09:57:20.383155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:20:36.440 [2024-12-06 09:57:20.383190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:114760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.440 [2024-12-06 09:57:20.383243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:20:36.440 [2024-12-06 09:57:20.383266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:114768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.440 [2024-12-06 09:57:20.383281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:20:36.440 [2024-12-06 09:57:20.383301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:114776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.440 [2024-12-06 09:57:20.383315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:20:36.440 [2024-12-06 09:57:20.383335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:114784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.440 [2024-12-06 09:57:20.383349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:20:36.440 [2024-12-06 09:57:20.383370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:114792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.440 [2024-12-06 09:57:20.383384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:20:36.440 [2024-12-06 09:57:20.383404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:114800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.440 [2024-12-06 09:57:20.383418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:20:36.440 [2024-12-06 09:57:20.383438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:114808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.440 [2024-12-06 09:57:20.383452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:20:36.440 [2024-12-06 09:57:20.383495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:120 nsid:1 lba:114816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.440 [2024-12-06 09:57:20.383511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:20:36.440 [2024-12-06 09:57:20.383531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:114824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.440 [2024-12-06 09:57:20.383560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:20:36.440 [2024-12-06 09:57:20.383579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:114832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.440 [2024-12-06 09:57:20.383593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:20:36.440 [2024-12-06 09:57:20.383624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:114840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.440 [2024-12-06 09:57:20.383655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:36.440 [2024-12-06 09:57:20.383675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:114848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.440 [2024-12-06 09:57:20.383688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:20:36.440 [2024-12-06 09:57:20.383707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:114856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.440 [2024-12-06 09:57:20.383720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:20:36.440 [2024-12-06 09:57:20.383739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:114864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.440 [2024-12-06 09:57:20.383753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:20:36.440 [2024-12-06 09:57:20.383771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:114872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.440 [2024-12-06 09:57:20.383784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:20:36.440 [2024-12-06 09:57:20.383803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:114880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.440 [2024-12-06 09:57:20.383817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:20:36.440 [2024-12-06 09:57:20.383835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:114888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.440 [2024-12-06 09:57:20.383848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:20:36.440 [2024-12-06 
09:57:20.383867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:114896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.440 [2024-12-06 09:57:20.383880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:20:36.440 [2024-12-06 09:57:20.383899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:114904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.440 [2024-12-06 09:57:20.383913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:36.440 [2024-12-06 09:57:20.383955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:115424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.440 [2024-12-06 09:57:20.383974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:20:36.440 [2024-12-06 09:57:20.384003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:115432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.440 [2024-12-06 09:57:20.384018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:20:36.440 [2024-12-06 09:57:20.384037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:115440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.440 [2024-12-06 09:57:20.384050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.440 [2024-12-06 09:57:20.384069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:115448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.440 [2024-12-06 09:57:20.384082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.440 [2024-12-06 09:57:20.384102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:115456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.440 [2024-12-06 09:57:20.384115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:36.440 [2024-12-06 09:57:20.384133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:115464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.440 [2024-12-06 09:57:20.384147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:20:36.440 [2024-12-06 09:57:20.384165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:115472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.440 [2024-12-06 09:57:20.384179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:20:36.440 [2024-12-06 09:57:20.384197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:115480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.440 [2024-12-06 09:57:20.384210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:100 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:20:36.440 [2024-12-06 09:57:20.384229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:115488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.440 [2024-12-06 09:57:20.384242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:36.440 [2024-12-06 09:57:20.384261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:115496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.440 [2024-12-06 09:57:20.384275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:20:36.440 [2024-12-06 09:57:20.384294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:115504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.440 [2024-12-06 09:57:20.384307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:20:36.440 [2024-12-06 09:57:20.384326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:115512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.440 [2024-12-06 09:57:20.384339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:20:36.440 [2024-12-06 09:57:20.384358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:114912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.441 [2024-12-06 09:57:20.384378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:20:36.441 [2024-12-06 09:57:20.384398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:114920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.441 [2024-12-06 09:57:20.384412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:20:36.441 [2024-12-06 09:57:20.384431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:114928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.441 [2024-12-06 09:57:20.384444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:20:36.441 [2024-12-06 09:57:20.384462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:114936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.441 [2024-12-06 09:57:20.384476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:20:36.441 [2024-12-06 09:57:20.384494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:114944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.441 [2024-12-06 09:57:20.384507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:20:36.441 [2024-12-06 09:57:20.384531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:114952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.441 [2024-12-06 09:57:20.384545] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:20:36.441 [2024-12-06 09:57:20.384563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:114960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.441 [2024-12-06 09:57:20.384577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:20:36.441 [2024-12-06 09:57:20.385213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:114968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.441 [2024-12-06 09:57:20.385241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:20:36.441 [2024-12-06 09:57:20.385271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:115520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.441 [2024-12-06 09:57:20.385287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:20:36.441 [2024-12-06 09:57:20.385311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:115528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.441 [2024-12-06 09:57:20.385325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:20:36.441 [2024-12-06 09:57:20.385349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:115536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.441 [2024-12-06 09:57:20.385371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:20:36.441 [2024-12-06 09:57:20.385396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:115544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.441 [2024-12-06 09:57:20.385409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:36.441 [2024-12-06 09:57:20.385433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:115552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.441 [2024-12-06 09:57:20.385457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:20:36.441 [2024-12-06 09:57:20.385483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:115560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.441 [2024-12-06 09:57:20.385497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:20:36.441 [2024-12-06 09:57:20.385521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:115568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.441 [2024-12-06 09:57:20.385534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:20:36.441 [2024-12-06 09:57:20.385749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:115576 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:20:36.441 [2024-12-06 09:57:20.385775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:20:36.441 [2024-12-06 09:57:20.385805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:115584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.441 [2024-12-06 09:57:20.385820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:20:36.441 [2024-12-06 09:57:20.385845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:115592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.441 [2024-12-06 09:57:20.385858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:20:36.441 [2024-12-06 09:57:20.385883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:115600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.441 [2024-12-06 09:57:20.385896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:20:36.441 [2024-12-06 09:57:20.385921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:115608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.441 [2024-12-06 09:57:20.385934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:20:36.441 [2024-12-06 09:57:20.385958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:114976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.441 [2024-12-06 09:57:20.385972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:20:36.441 [2024-12-06 09:57:20.385997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:114984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.441 [2024-12-06 09:57:20.386011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:20:36.441 [2024-12-06 09:57:20.386035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:114992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.441 [2024-12-06 09:57:20.386049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:20:36.441 [2024-12-06 09:57:20.386072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:115000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.441 [2024-12-06 09:57:20.386086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:36.441 [2024-12-06 09:57:20.386110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:115008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.441 [2024-12-06 09:57:20.386133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:36.441 [2024-12-06 09:57:20.386159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:62 nsid:1 lba:115016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.441 [2024-12-06 09:57:20.386173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:36.441 [2024-12-06 09:57:20.386197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:115024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.441 [2024-12-06 09:57:20.386216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:36.441 [2024-12-06 09:57:20.386241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:115032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.441 [2024-12-06 09:57:20.386255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:36.441 [2024-12-06 09:57:20.386278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:115040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.441 [2024-12-06 09:57:20.386292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:36.441 [2024-12-06 09:57:20.386316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:115048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.441 [2024-12-06 09:57:20.386329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:36.441 [2024-12-06 09:57:20.386354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:115056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.441 [2024-12-06 09:57:20.386367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:36.441 [2024-12-06 09:57:20.386392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:115064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.441 [2024-12-06 09:57:20.386405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:20:36.441 [2024-12-06 09:57:20.386429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:115072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.441 [2024-12-06 09:57:20.386443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:20:36.441 [2024-12-06 09:57:20.386467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:115080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.441 [2024-12-06 09:57:20.386480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:20:36.441 [2024-12-06 09:57:20.386504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:115088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.441 [2024-12-06 09:57:20.386517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:36.441 [2024-12-06 
09:57:20.386542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:115096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.441 [2024-12-06 09:57:20.386555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:36.441 9693.07 IOPS, 37.86 MiB/s [2024-12-06T09:58:01.713Z] 9167.31 IOPS, 35.81 MiB/s [2024-12-06T09:58:01.713Z] 9246.35 IOPS, 36.12 MiB/s [2024-12-06T09:58:01.713Z] 9320.06 IOPS, 36.41 MiB/s [2024-12-06T09:58:01.713Z] 9403.00 IOPS, 36.73 MiB/s [2024-12-06T09:58:01.713Z] 9472.35 IOPS, 37.00 MiB/s [2024-12-06T09:58:01.713Z] 9537.57 IOPS, 37.26 MiB/s [2024-12-06T09:58:01.713Z] 9599.50 IOPS, 37.50 MiB/s [2024-12-06T09:58:01.713Z] [2024-12-06 09:57:27.531005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:118856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.441 [2024-12-06 09:57:27.531055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:36.442 [2024-12-06 09:57:27.531122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:118864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.442 [2024-12-06 09:57:27.531141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:36.442 [2024-12-06 09:57:27.531162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:118872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.442 [2024-12-06 09:57:27.531175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:36.442 [2024-12-06 09:57:27.531193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:118880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.442 [2024-12-06 09:57:27.531232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:36.442 [2024-12-06 09:57:27.531270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:118888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.442 [2024-12-06 09:57:27.531284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:20:36.442 [2024-12-06 09:57:27.531303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:118896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.442 [2024-12-06 09:57:27.531317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:20:36.442 [2024-12-06 09:57:27.531336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:118904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.442 [2024-12-06 09:57:27.531350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:20:36.442 [2024-12-06 09:57:27.531369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:118912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.442 [2024-12-06 09:57:27.531383] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:20:36.442 [2024-12-06 09:57:27.531402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:118408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.442 [2024-12-06 09:57:27.531415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:20:36.442 [2024-12-06 09:57:27.531434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:118416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.442 [2024-12-06 09:57:27.531447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:36.442 [2024-12-06 09:57:27.531466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:118424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.442 [2024-12-06 09:57:27.531479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:20:36.442 [2024-12-06 09:57:27.531499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:118432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.442 [2024-12-06 09:57:27.531511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:20:36.442 [2024-12-06 09:57:27.531566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:118440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.442 [2024-12-06 09:57:27.531581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:20:36.442 [2024-12-06 09:57:27.531600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:118448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.442 [2024-12-06 09:57:27.531623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:20:36.442 [2024-12-06 09:57:27.531649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:118456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.442 [2024-12-06 09:57:27.531677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:20:36.442 [2024-12-06 09:57:27.531695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:118464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.442 [2024-12-06 09:57:27.531708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:20:36.442 [2024-12-06 09:57:27.531732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:118920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.442 [2024-12-06 09:57:27.531745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:20:36.442 [2024-12-06 09:57:27.531763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:118928 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:20:36.442 [2024-12-06 09:57:27.531778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:20:36.442 [2024-12-06 09:57:27.531796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:118936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.442 [2024-12-06 09:57:27.531809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:20:36.442 [2024-12-06 09:57:27.531827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:118944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.442 [2024-12-06 09:57:27.531839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:20:36.442 [2024-12-06 09:57:27.531857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:118952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.442 [2024-12-06 09:57:27.531870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:20:36.442 [2024-12-06 09:57:27.531887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:118960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.442 [2024-12-06 09:57:27.531900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:36.442 [2024-12-06 09:57:27.531917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:118968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.442 [2024-12-06 09:57:27.531930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:36.442 [2024-12-06 09:57:27.531947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:118976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.442 [2024-12-06 09:57:27.531959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:20:36.442 [2024-12-06 09:57:27.531977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:118984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.442 [2024-12-06 09:57:27.531999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:20:36.442 [2024-12-06 09:57:27.532018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:118992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.442 [2024-12-06 09:57:27.532030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:20:36.442 [2024-12-06 09:57:27.532048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:119000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.442 [2024-12-06 09:57:27.532061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:36.442 [2024-12-06 09:57:27.532079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:6 nsid:1 lba:119008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.442 [2024-12-06 09:57:27.532091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:20:36.442 [2024-12-06 09:57:27.532109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:119016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.442 [2024-12-06 09:57:27.532121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:20:36.442 [2024-12-06 09:57:27.532140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:119024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.442 [2024-12-06 09:57:27.532152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:20:36.442 [2024-12-06 09:57:27.532170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:119032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.442 [2024-12-06 09:57:27.532183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:20:36.442 [2024-12-06 09:57:27.532200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:119040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.442 [2024-12-06 09:57:27.532213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:20:36.442 [2024-12-06 09:57:27.532231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:118472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.442 [2024-12-06 09:57:27.532244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:20:36.442 [2024-12-06 09:57:27.532262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:118480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.442 [2024-12-06 09:57:27.532275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:20:36.442 [2024-12-06 09:57:27.532293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:118488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.442 [2024-12-06 09:57:27.532306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:20:36.442 [2024-12-06 09:57:27.532324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:118496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.442 [2024-12-06 09:57:27.532337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:20:36.442 [2024-12-06 09:57:27.532355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:118504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.442 [2024-12-06 09:57:27.532373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:20:36.442 [2024-12-06 
09:57:27.532392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:118512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.442 [2024-12-06 09:57:27.532405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:20:36.442 [2024-12-06 09:57:27.532423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:118520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.442 [2024-12-06 09:57:27.532436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:20:36.443 [2024-12-06 09:57:27.532454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:118528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.443 [2024-12-06 09:57:27.532466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:20:36.443 [2024-12-06 09:57:27.532488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:119048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.443 [2024-12-06 09:57:27.532501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:20:36.443 [2024-12-06 09:57:27.532520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:119056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.443 [2024-12-06 09:57:27.532532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:36.443 [2024-12-06 09:57:27.532550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:119064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.443 [2024-12-06 09:57:27.532562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:20:36.443 [2024-12-06 09:57:27.532604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:119072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.443 [2024-12-06 09:57:27.532620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:20:36.443 [2024-12-06 09:57:27.532640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:119080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.443 [2024-12-06 09:57:27.532652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:20:36.443 [2024-12-06 09:57:27.532671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:119088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.443 [2024-12-06 09:57:27.532684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:20:36.443 [2024-12-06 09:57:27.532702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:119096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.443 [2024-12-06 09:57:27.532715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:9 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:20:36.443 [2024-12-06 09:57:27.532733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:119104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.443 [2024-12-06 09:57:27.532746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:20:36.443 [2024-12-06 09:57:27.532764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:119112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.443 [2024-12-06 09:57:27.532777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:20:36.443 [2024-12-06 09:57:27.532806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:119120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.443 [2024-12-06 09:57:27.532821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:20:36.443 [2024-12-06 09:57:27.532840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:119128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.443 [2024-12-06 09:57:27.532853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:20:36.443 [2024-12-06 09:57:27.532871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:119136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.443 [2024-12-06 09:57:27.532884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:20:36.443 [2024-12-06 09:57:27.532903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:119144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.443 [2024-12-06 09:57:27.532916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:20:36.443 [2024-12-06 09:57:27.532934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:119152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.443 [2024-12-06 09:57:27.532947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:36.443 [2024-12-06 09:57:27.532981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:119160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.443 [2024-12-06 09:57:27.532993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:36.443 [2024-12-06 09:57:27.533011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:119168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.443 [2024-12-06 09:57:27.533024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:20:36.443 [2024-12-06 09:57:27.533042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:118536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.443 [2024-12-06 09:57:27.533054] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:20:36.443 [2024-12-06 09:57:27.533073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:118544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.443 [2024-12-06 09:57:27.533085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:20:36.443 [2024-12-06 09:57:27.533103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:118552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.443 [2024-12-06 09:57:27.533115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:36.443 [2024-12-06 09:57:27.533133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:118560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.443 [2024-12-06 09:57:27.533146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:20:36.443 [2024-12-06 09:57:27.533164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:118568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.443 [2024-12-06 09:57:27.533176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:20:36.443 [2024-12-06 09:57:27.533200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:118576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.443 [2024-12-06 09:57:27.533213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:20:36.443 [2024-12-06 09:57:27.533231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:118584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.443 [2024-12-06 09:57:27.533243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:20:36.443 [2024-12-06 09:57:27.533261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:118592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.443 [2024-12-06 09:57:27.533274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:20:36.443 [2024-12-06 09:57:27.533310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:118600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.443 [2024-12-06 09:57:27.533323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:20:36.443 [2024-12-06 09:57:27.533342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:118608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.443 [2024-12-06 09:57:27.533356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:20:36.443 [2024-12-06 09:57:27.533375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:118616 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:36.443 [2024-12-06 09:57:27.533388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:20:36.443 [2024-12-06 09:57:27.533406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:118624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.443 [2024-12-06 09:57:27.533419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:20:36.443 [2024-12-06 09:57:27.533438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:118632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.443 [2024-12-06 09:57:27.533450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:20:36.443 [2024-12-06 09:57:27.533469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:118640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.443 [2024-12-06 09:57:27.533482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:20:36.443 [2024-12-06 09:57:27.533500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:118648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.443 [2024-12-06 09:57:27.533513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:20:36.443 [2024-12-06 09:57:27.533531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:118656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.443 [2024-12-06 09:57:27.533544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:20:36.443 [2024-12-06 09:57:27.533562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:118664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.443 [2024-12-06 09:57:27.533576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:20:36.443 [2024-12-06 09:57:27.533624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:118672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.443 [2024-12-06 09:57:27.533641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:36.443 [2024-12-06 09:57:27.533661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:118680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.443 [2024-12-06 09:57:27.533674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:20:36.443 [2024-12-06 09:57:27.533693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:118688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.443 [2024-12-06 09:57:27.533706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:20:36.443 [2024-12-06 09:57:27.533725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:24 nsid:1 lba:118696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.444 [2024-12-06 09:57:27.533739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:20:36.444 [2024-12-06 09:57:27.533758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:118704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.444 [2024-12-06 09:57:27.533771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:20:36.444 [2024-12-06 09:57:27.533790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:118712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.444 [2024-12-06 09:57:27.533803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:20:36.444 [2024-12-06 09:57:27.533822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:118720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.444 [2024-12-06 09:57:27.533835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:20:36.444 [2024-12-06 09:57:27.533858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:119176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.444 [2024-12-06 09:57:27.533872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:20:36.444 [2024-12-06 09:57:27.533891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:119184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.444 [2024-12-06 09:57:27.533906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:36.444 [2024-12-06 09:57:27.533925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:119192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.444 [2024-12-06 09:57:27.533939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:20:36.444 [2024-12-06 09:57:27.533958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:119200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.444 [2024-12-06 09:57:27.533985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:20:36.444 [2024-12-06 09:57:27.534004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:119208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.444 [2024-12-06 09:57:27.534016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.444 [2024-12-06 09:57:27.534035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:119216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.444 [2024-12-06 09:57:27.534054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.444 [2024-12-06 
09:57:27.534073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:119224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.444 [2024-12-06 09:57:27.534086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:36.444 [2024-12-06 09:57:27.534105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:119232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.444 [2024-12-06 09:57:27.534117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:20:36.444 [2024-12-06 09:57:27.534136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:118728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.444 [2024-12-06 09:57:27.534149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:20:36.444 [2024-12-06 09:57:27.534167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:118736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.444 [2024-12-06 09:57:27.534181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:20:36.444 [2024-12-06 09:57:27.534199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:118744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.444 [2024-12-06 09:57:27.534212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:36.444 [2024-12-06 09:57:27.534230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:118752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.444 [2024-12-06 09:57:27.534243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:20:36.444 [2024-12-06 09:57:27.534262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:118760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.444 [2024-12-06 09:57:27.534274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:20:36.444 [2024-12-06 09:57:27.534293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:118768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.444 [2024-12-06 09:57:27.534305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:20:36.444 [2024-12-06 09:57:27.534324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:118776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.444 [2024-12-06 09:57:27.534337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:20:36.444 [2024-12-06 09:57:27.534355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:118784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.444 [2024-12-06 09:57:27.534368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:32 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:20:36.444 [2024-12-06 09:57:27.534387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:118792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.444 [2024-12-06 09:57:27.534400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:20:36.444 [2024-12-06 09:57:27.534419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:118800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.444 [2024-12-06 09:57:27.534437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:20:36.444 [2024-12-06 09:57:27.534458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:118808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.444 [2024-12-06 09:57:27.534484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:20:36.444 [2024-12-06 09:57:27.534514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:118816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.444 [2024-12-06 09:57:27.534530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:20:36.444 [2024-12-06 09:57:27.534549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:118824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.444 [2024-12-06 09:57:27.534562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:20:36.444 [2024-12-06 09:57:27.534593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:118832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.444 [2024-12-06 09:57:27.534610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:20:36.444 [2024-12-06 09:57:27.534630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:118840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.444 [2024-12-06 09:57:27.534643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:20:36.444 [2024-12-06 09:57:27.535329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:118848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.444 [2024-12-06 09:57:27.535357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:20:36.444 [2024-12-06 09:57:27.535388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:119240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.444 [2024-12-06 09:57:27.535404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:20:36.444 [2024-12-06 09:57:27.535430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:119248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.444 [2024-12-06 09:57:27.535444] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:36.444 [2024-12-06 09:57:27.535470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:119256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.444 [2024-12-06 09:57:27.535484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:20:36.444 [2024-12-06 09:57:27.535510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:119264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.444 [2024-12-06 09:57:27.535524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:20:36.444 [2024-12-06 09:57:27.535563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:119272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.444 [2024-12-06 09:57:27.535577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:20:36.444 [2024-12-06 09:57:27.535602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:119280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.444 [2024-12-06 09:57:27.535631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:20:36.445 [2024-12-06 09:57:27.535686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:119288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.445 [2024-12-06 09:57:27.535701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:20:36.445 [2024-12-06 09:57:27.535742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:119296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.445 [2024-12-06 09:57:27.535761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:20:36.445 [2024-12-06 09:57:27.535786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:119304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.445 [2024-12-06 09:57:27.535800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:20:36.445 [2024-12-06 09:57:27.535825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:119312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.445 [2024-12-06 09:57:27.535839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:20:36.445 [2024-12-06 09:57:27.535863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:119320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.445 [2024-12-06 09:57:27.535876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:20:36.445 [2024-12-06 09:57:27.535900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:119328 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:20:36.445 [2024-12-06 09:57:27.535913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:20:36.445 [2024-12-06 09:57:27.535937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:119336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.445 [2024-12-06 09:57:27.535951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:20:36.445 [2024-12-06 09:57:27.535974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:119344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.445 [2024-12-06 09:57:27.535988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:36.445 [2024-12-06 09:57:27.536013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:119352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.445 [2024-12-06 09:57:27.536026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:36.445 [2024-12-06 09:57:27.536062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:119360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.445 [2024-12-06 09:57:27.536079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:36.445 [2024-12-06 09:57:27.536104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:119368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.445 [2024-12-06 09:57:27.536118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:36.445 [2024-12-06 09:57:27.536142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:119376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.445 [2024-12-06 09:57:27.536155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:36.445 [2024-12-06 09:57:27.536188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:119384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.445 [2024-12-06 09:57:27.536202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:36.445 [2024-12-06 09:57:27.536226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:119392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.445 [2024-12-06 09:57:27.536239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:36.445 [2024-12-06 09:57:27.536263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:119400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.445 [2024-12-06 09:57:27.536276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:36.445 [2024-12-06 09:57:27.536300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:126 nsid:1 lba:119408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.445 [2024-12-06 09:57:27.536313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:20:36.445 [2024-12-06 09:57:27.536337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:119416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.445 [2024-12-06 09:57:27.536350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:20:36.445 [2024-12-06 09:57:27.536374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:119424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.445 [2024-12-06 09:57:27.536387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:20:36.445 9191.70 IOPS, 35.91 MiB/s [2024-12-06T09:58:01.717Z] 8808.71 IOPS, 34.41 MiB/s [2024-12-06T09:58:01.717Z] 8456.36 IOPS, 33.03 MiB/s [2024-12-06T09:58:01.717Z] 8131.12 IOPS, 31.76 MiB/s [2024-12-06T09:58:01.717Z] 7829.96 IOPS, 30.59 MiB/s [2024-12-06T09:58:01.717Z] 7550.32 IOPS, 29.49 MiB/s [2024-12-06T09:58:01.717Z] 7289.97 IOPS, 28.48 MiB/s [2024-12-06T09:58:01.717Z] 7388.47 IOPS, 28.86 MiB/s [2024-12-06T09:58:01.717Z] 7494.52 IOPS, 29.28 MiB/s [2024-12-06T09:58:01.717Z] 7595.06 IOPS, 29.67 MiB/s [2024-12-06T09:58:01.717Z] 7690.73 IOPS, 30.04 MiB/s [2024-12-06T09:58:01.717Z] 7782.41 IOPS, 30.40 MiB/s [2024-12-06T09:58:01.717Z] 7867.26 IOPS, 30.73 MiB/s [2024-12-06T09:58:01.717Z] [2024-12-06 09:57:40.928688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:10768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.445 [2024-12-06 09:57:40.928744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:36.445 [2024-12-06 09:57:40.928809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.445 [2024-12-06 09:57:40.928834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:36.445 [2024-12-06 09:57:40.928855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.445 [2024-12-06 09:57:40.928869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:20:36.445 [2024-12-06 09:57:40.928889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:10792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.445 [2024-12-06 09:57:40.928903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:20:36.445 [2024-12-06 09:57:40.928922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:10800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.445 [2024-12-06 09:57:40.928935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:20:36.445 [2024-12-06 09:57:40.928991] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:10808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.445 [2024-12-06 09:57:40.929005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:20:36.445 [2024-12-06 09:57:40.929023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:10816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.445 [2024-12-06 09:57:40.929035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:20:36.445 [2024-12-06 09:57:40.929053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:10824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.445 [2024-12-06 09:57:40.929065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:36.445 [2024-12-06 09:57:40.929083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:10832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.445 [2024-12-06 09:57:40.929095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:20:36.445 [2024-12-06 09:57:40.929113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:10840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.445 [2024-12-06 09:57:40.929125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:20:36.445 [2024-12-06 09:57:40.929143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:10848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.445 [2024-12-06 09:57:40.929156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:20:36.445 [2024-12-06 09:57:40.929174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:10856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.445 [2024-12-06 09:57:40.929186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:20:36.445 [2024-12-06 09:57:40.929204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:10864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.445 [2024-12-06 09:57:40.929216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:20:36.445 [2024-12-06 09:57:40.929234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:10872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.445 [2024-12-06 09:57:40.929247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:20:36.445 [2024-12-06 09:57:40.929265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:10880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.445 [2024-12-06 09:57:40.929278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:20:36.445 [2024-12-06 
09:57:40.929296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:10888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.445 [2024-12-06 09:57:40.929308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:20:36.445 [2024-12-06 09:57:40.929327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:10896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.445 [2024-12-06 09:57:40.929339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:20:36.445 [2024-12-06 09:57:40.929367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:10904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.445 [2024-12-06 09:57:40.929382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:20:36.446 [2024-12-06 09:57:40.929401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:10912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.446 [2024-12-06 09:57:40.929414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:20:36.446 [2024-12-06 09:57:40.929432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:10920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.446 [2024-12-06 09:57:40.929445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:36.446 [2024-12-06 09:57:40.929463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:10448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.446 [2024-12-06 09:57:40.929476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:36.446 [2024-12-06 09:57:40.929494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.446 [2024-12-06 09:57:40.929507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:20:36.446 [2024-12-06 09:57:40.929541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:10464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.446 [2024-12-06 09:57:40.929554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:20:36.446 [2024-12-06 09:57:40.929573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:10472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.446 [2024-12-06 09:57:40.929585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:20:36.446 [2024-12-06 09:57:40.929634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:10480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.446 [2024-12-06 09:57:40.929650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 
cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:36.446 [2024-12-06 09:57:40.929669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:10488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.446 [2024-12-06 09:57:40.929683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:20:36.446 [2024-12-06 09:57:40.929702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:10496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.446 [2024-12-06 09:57:40.929715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:20:36.446 [2024-12-06 09:57:40.929734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.446 [2024-12-06 09:57:40.929748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:20:36.446 [2024-12-06 09:57:40.929767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:10928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.446 [2024-12-06 09:57:40.929780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:20:36.446 [2024-12-06 09:57:40.929799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:10936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.446 [2024-12-06 09:57:40.929820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:20:36.446 [2024-12-06 09:57:40.929840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:10944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.446 [2024-12-06 09:57:40.929854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:20:36.446 [2024-12-06 09:57:40.929873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:10952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.446 [2024-12-06 09:57:40.929887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:20:36.446 [2024-12-06 09:57:40.929955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:10960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.446 [2024-12-06 09:57:40.929989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.446 [2024-12-06 09:57:40.930004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:10968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.446 [2024-12-06 09:57:40.930017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.446 [2024-12-06 09:57:40.930031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.446 [2024-12-06 09:57:40.930042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.446 [2024-12-06 09:57:40.930055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:10984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.446 [2024-12-06 09:57:40.930067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.446 [2024-12-06 09:57:40.930080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:10992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.446 [2024-12-06 09:57:40.930092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.446 [2024-12-06 09:57:40.930104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:11000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.446 [2024-12-06 09:57:40.930116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.446 [2024-12-06 09:57:40.930129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:11008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.446 [2024-12-06 09:57:40.930140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.446 [2024-12-06 09:57:40.930154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:11016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.446 [2024-12-06 09:57:40.930165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.446 [2024-12-06 09:57:40.930178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.446 [2024-12-06 09:57:40.930190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.446 [2024-12-06 09:57:40.930203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:10520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.446 [2024-12-06 09:57:40.930215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.446 [2024-12-06 09:57:40.930237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.446 [2024-12-06 09:57:40.930249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.446 [2024-12-06 09:57:40.930262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:10536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.446 [2024-12-06 09:57:40.930274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.446 [2024-12-06 09:57:40.930287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:10544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.446 [2024-12-06 09:57:40.930299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:20:36.446 [2024-12-06 09:57:40.930312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:10552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.446 [2024-12-06 09:57:40.930324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.446 [2024-12-06 09:57:40.930337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:10560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.446 [2024-12-06 09:57:40.930348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.446 [2024-12-06 09:57:40.930378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.446 [2024-12-06 09:57:40.930390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.446 [2024-12-06 09:57:40.930404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.446 [2024-12-06 09:57:40.930416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.446 [2024-12-06 09:57:40.930430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:10584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.446 [2024-12-06 09:57:40.930443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.446 [2024-12-06 09:57:40.930456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:10592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.446 [2024-12-06 09:57:40.930468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.446 [2024-12-06 09:57:40.930481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:10600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.446 [2024-12-06 09:57:40.930493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.446 [2024-12-06 09:57:40.930507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:10608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.446 [2024-12-06 09:57:40.930519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.446 [2024-12-06 09:57:40.930532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:10616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.446 [2024-12-06 09:57:40.930544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.446 [2024-12-06 09:57:40.930558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.446 [2024-12-06 09:57:40.930575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.446 [2024-12-06 
09:57:40.930589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:10632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.446 [2024-12-06 09:57:40.930601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.446 [2024-12-06 09:57:40.930628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:11024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.446 [2024-12-06 09:57:40.930642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.446 [2024-12-06 09:57:40.930656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:11032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.447 [2024-12-06 09:57:40.930668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.447 [2024-12-06 09:57:40.930681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:11040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.447 [2024-12-06 09:57:40.930693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.447 [2024-12-06 09:57:40.930706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:11048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.447 [2024-12-06 09:57:40.930718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.447 [2024-12-06 09:57:40.930732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:11056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.447 [2024-12-06 09:57:40.930743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.447 [2024-12-06 09:57:40.930757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:11064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.447 [2024-12-06 09:57:40.930768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.447 [2024-12-06 09:57:40.930781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:11072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.447 [2024-12-06 09:57:40.930794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.447 [2024-12-06 09:57:40.930807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:11080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.447 [2024-12-06 09:57:40.930819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.447 [2024-12-06 09:57:40.930832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:11088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.447 [2024-12-06 09:57:40.930844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.447 [2024-12-06 09:57:40.930858] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:11096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.447 [2024-12-06 09:57:40.930871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.447 [2024-12-06 09:57:40.930884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:11104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.447 [2024-12-06 09:57:40.930896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.447 [2024-12-06 09:57:40.930916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:11112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.447 [2024-12-06 09:57:40.930929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.447 [2024-12-06 09:57:40.930943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:11120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.447 [2024-12-06 09:57:40.930955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.447 [2024-12-06 09:57:40.930968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:11128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.447 [2024-12-06 09:57:40.930980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.447 [2024-12-06 09:57:40.930993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:11136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.447 [2024-12-06 09:57:40.931005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.447 [2024-12-06 09:57:40.931018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:11144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.447 [2024-12-06 09:57:40.931030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.447 [2024-12-06 09:57:40.931043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:11152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.447 [2024-12-06 09:57:40.931055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.447 [2024-12-06 09:57:40.931069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:11160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.447 [2024-12-06 09:57:40.931081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.447 [2024-12-06 09:57:40.931094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:11168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.447 [2024-12-06 09:57:40.931106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.447 [2024-12-06 09:57:40.931120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:11176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.447 [2024-12-06 09:57:40.931132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.447 [2024-12-06 09:57:40.931145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:11184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.447 [2024-12-06 09:57:40.931157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.447 [2024-12-06 09:57:40.931170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:11192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.447 [2024-12-06 09:57:40.931182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.447 [2024-12-06 09:57:40.931195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:11200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.447 [2024-12-06 09:57:40.931215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.447 [2024-12-06 09:57:40.931264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:11208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.447 [2024-12-06 09:57:40.931277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.447 [2024-12-06 09:57:40.931298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:10640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.447 [2024-12-06 09:57:40.931312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.447 [2024-12-06 09:57:40.931327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:10648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.447 [2024-12-06 09:57:40.931341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.447 [2024-12-06 09:57:40.931356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.447 [2024-12-06 09:57:40.931369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.447 [2024-12-06 09:57:40.931384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:10664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.447 [2024-12-06 09:57:40.931397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.447 [2024-12-06 09:57:40.931411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:10672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.447 [2024-12-06 09:57:40.931424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.447 [2024-12-06 09:57:40.931439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:10680 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.447 [2024-12-06 09:57:40.931451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.447 [2024-12-06 09:57:40.931466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:10688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.447 [2024-12-06 09:57:40.931479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.447 [2024-12-06 09:57:40.931493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:10696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.447 [2024-12-06 09:57:40.931506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.447 [2024-12-06 09:57:40.931521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:11216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.447 [2024-12-06 09:57:40.931534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.447 [2024-12-06 09:57:40.931563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:11224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.447 [2024-12-06 09:57:40.931575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.447 [2024-12-06 09:57:40.931589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:11232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.447 [2024-12-06 09:57:40.931601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.447 [2024-12-06 09:57:40.931623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:11240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.447 [2024-12-06 09:57:40.931638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.447 [2024-12-06 09:57:40.931652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:11248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.447 [2024-12-06 09:57:40.931685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.447 [2024-12-06 09:57:40.931699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.447 [2024-12-06 09:57:40.931711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.447 [2024-12-06 09:57:40.931724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:11264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.447 [2024-12-06 09:57:40.931736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.447 [2024-12-06 09:57:40.931749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:11272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.447 
[2024-12-06 09:57:40.931760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.447 [2024-12-06 09:57:40.931774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:11280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.447 [2024-12-06 09:57:40.931786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.448 [2024-12-06 09:57:40.931799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:11288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.448 [2024-12-06 09:57:40.931812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.448 [2024-12-06 09:57:40.931826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:11296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.448 [2024-12-06 09:57:40.931838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.448 [2024-12-06 09:57:40.931852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:11304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.448 [2024-12-06 09:57:40.931863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.448 [2024-12-06 09:57:40.931877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:11312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.448 [2024-12-06 09:57:40.931889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.448 [2024-12-06 09:57:40.931902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:11320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.448 [2024-12-06 09:57:40.931914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.448 [2024-12-06 09:57:40.931928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:11328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.448 [2024-12-06 09:57:40.931939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.448 [2024-12-06 09:57:40.931953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:36.448 [2024-12-06 09:57:40.931965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.448 [2024-12-06 09:57:40.931978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:10704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.448 [2024-12-06 09:57:40.931990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.448 [2024-12-06 09:57:40.932011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:10712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.448 [2024-12-06 09:57:40.932025] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.448 [2024-12-06 09:57:40.932038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:10720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.448 [2024-12-06 09:57:40.932051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.448 [2024-12-06 09:57:40.932064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:10728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.448 [2024-12-06 09:57:40.932076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.448 [2024-12-06 09:57:40.932090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:10736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.448 [2024-12-06 09:57:40.932102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.448 [2024-12-06 09:57:40.932115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:10744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.448 [2024-12-06 09:57:40.932127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.448 [2024-12-06 09:57:40.932141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:10752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.448 [2024-12-06 09:57:40.932153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.448 [2024-12-06 09:57:40.932165] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x967290 is same with the state(6) to be set 00:20:36.448 [2024-12-06 09:57:40.932180] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:36.448 [2024-12-06 09:57:40.932190] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:36.448 [2024-12-06 09:57:40.932207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10760 len:8 PRP1 0x0 PRP2 0x0 00:20:36.448 [2024-12-06 09:57:40.932219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.448 [2024-12-06 09:57:40.932232] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:36.448 [2024-12-06 09:57:40.932241] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:36.448 [2024-12-06 09:57:40.932250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11344 len:8 PRP1 0x0 PRP2 0x0 00:20:36.448 [2024-12-06 09:57:40.932262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.448 [2024-12-06 09:57:40.932274] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:36.448 [2024-12-06 09:57:40.932283] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:36.448 [2024-12-06 09:57:40.932292] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11352 len:8 PRP1 0x0 PRP2 0x0 00:20:36.448 [2024-12-06 09:57:40.932303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.448 [2024-12-06 09:57:40.932315] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:36.448 [2024-12-06 09:57:40.932324] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:36.448 [2024-12-06 09:57:40.932333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11360 len:8 PRP1 0x0 PRP2 0x0 00:20:36.448 [2024-12-06 09:57:40.932351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.448 [2024-12-06 09:57:40.932364] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:36.448 [2024-12-06 09:57:40.932373] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:36.448 [2024-12-06 09:57:40.932382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11368 len:8 PRP1 0x0 PRP2 0x0 00:20:36.448 [2024-12-06 09:57:40.932394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.448 [2024-12-06 09:57:40.932406] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:36.448 [2024-12-06 09:57:40.932414] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:36.448 [2024-12-06 09:57:40.932423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11376 len:8 PRP1 0x0 PRP2 0x0 00:20:36.448 [2024-12-06 09:57:40.932435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.448 [2024-12-06 09:57:40.932447] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:36.448 [2024-12-06 09:57:40.932456] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:36.448 [2024-12-06 09:57:40.932465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11384 len:8 PRP1 0x0 PRP2 0x0 00:20:36.448 [2024-12-06 09:57:40.932476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.448 [2024-12-06 09:57:40.932488] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:36.448 [2024-12-06 09:57:40.932496] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:36.448 [2024-12-06 09:57:40.932505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11392 len:8 PRP1 0x0 PRP2 0x0 00:20:36.448 [2024-12-06 09:57:40.932517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.448 [2024-12-06 09:57:40.932529] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:36.448 [2024-12-06 09:57:40.932537] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:36.448 [2024-12-06 09:57:40.932551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11400 len:8 PRP1 
0x0 PRP2 0x0 00:20:36.448 [2024-12-06 09:57:40.932563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.448 [2024-12-06 09:57:40.932575] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:36.448 [2024-12-06 09:57:40.932610] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:36.448 [2024-12-06 09:57:40.932620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11408 len:8 PRP1 0x0 PRP2 0x0 00:20:36.448 [2024-12-06 09:57:40.932632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.448 [2024-12-06 09:57:40.932644] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:36.448 [2024-12-06 09:57:40.932653] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:36.448 [2024-12-06 09:57:40.932662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11416 len:8 PRP1 0x0 PRP2 0x0 00:20:36.448 [2024-12-06 09:57:40.932674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.448 [2024-12-06 09:57:40.932686] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:36.448 [2024-12-06 09:57:40.932695] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:36.448 [2024-12-06 09:57:40.932717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11424 len:8 PRP1 0x0 PRP2 0x0 00:20:36.448 [2024-12-06 09:57:40.932730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.449 [2024-12-06 09:57:40.932743] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:36.449 [2024-12-06 09:57:40.932752] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:36.449 [2024-12-06 09:57:40.932761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11432 len:8 PRP1 0x0 PRP2 0x0 00:20:36.449 [2024-12-06 09:57:40.932773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.449 [2024-12-06 09:57:40.932785] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:36.449 [2024-12-06 09:57:40.932794] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:36.449 [2024-12-06 09:57:40.932803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11440 len:8 PRP1 0x0 PRP2 0x0 00:20:36.449 [2024-12-06 09:57:40.932814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.449 [2024-12-06 09:57:40.932826] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:36.449 [2024-12-06 09:57:40.932835] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:36.449 [2024-12-06 09:57:40.932844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11448 len:8 PRP1 0x0 PRP2 0x0 00:20:36.449 [2024-12-06 09:57:40.932856] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.449 [2024-12-06 09:57:40.932867] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:36.449 [2024-12-06 09:57:40.932876] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:36.449 [2024-12-06 09:57:40.932885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11456 len:8 PRP1 0x0 PRP2 0x0 00:20:36.449 [2024-12-06 09:57:40.932897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.449 [2024-12-06 09:57:40.932909] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:36.449 [2024-12-06 09:57:40.932918] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:36.449 [2024-12-06 09:57:40.932932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11464 len:8 PRP1 0x0 PRP2 0x0 00:20:36.449 [2024-12-06 09:57:40.932944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.449 [2024-12-06 09:57:40.933152] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:36.449 [2024-12-06 09:57:40.933179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.449 [2024-12-06 09:57:40.933193] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:36.449 [2024-12-06 09:57:40.933204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.449 [2024-12-06 09:57:40.933217] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:36.449 [2024-12-06 09:57:40.933228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.449 [2024-12-06 09:57:40.933240] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:36.449 [2024-12-06 09:57:40.933263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.449 [2024-12-06 09:57:40.933276] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.449 [2024-12-06 09:57:40.933289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.449 [2024-12-06 09:57:40.933313] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d81e0 is same with the state(6) to be set 00:20:36.449 [2024-12-06 09:57:40.934389] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:20:36.449 [2024-12-06 09:57:40.934428] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8d81e0 (9): Bad file descriptor 
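The long run of notices above is the expected fallout of the controller reset: when the I/O submission queue is deleted, every command still queued on it is completed with status (00/08), read as (SCT/SC), i.e. Status Code Type 0x0 (generic command status) with Status Code 0x08, "Command Aborted due to SQ Deletion". As a hedged aside, not part of the harness, the aborted completions can be tallied from a saved copy of this output (the file name below is illustrative):
# Count completions aborted by SQ deletion; bdevperf_output.log is an assumed name,
# not a file created by this run.
grep -o 'ABORTED - SQ DELETION (00/08)' bdevperf_output.log | wc -l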
00:20:36.449 [2024-12-06 09:57:40.934855] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:36.449 [2024-12-06 09:57:40.934887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d81e0 with addr=10.0.0.3, port=4421 00:20:36.449 [2024-12-06 09:57:40.934903] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d81e0 is same with the state(6) to be set 00:20:36.449 [2024-12-06 09:57:40.934983] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8d81e0 (9): Bad file descriptor 00:20:36.449 [2024-12-06 09:57:40.935015] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:20:36.449 [2024-12-06 09:57:40.935030] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:20:36.449 [2024-12-06 09:57:40.935044] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:20:36.449 [2024-12-06 09:57:40.935057] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:20:36.449 [2024-12-06 09:57:40.935070] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:20:36.449 7934.31 IOPS, 30.99 MiB/s [2024-12-06T09:58:01.721Z] 7990.57 IOPS, 31.21 MiB/s [2024-12-06T09:58:01.721Z] 8044.71 IOPS, 31.42 MiB/s [2024-12-06T09:58:01.721Z] 8095.05 IOPS, 31.62 MiB/s [2024-12-06T09:58:01.721Z] 8142.98 IOPS, 31.81 MiB/s [2024-12-06T09:58:01.721Z] 8142.90 IOPS, 31.81 MiB/s [2024-12-06T09:58:01.721Z] 8130.93 IOPS, 31.76 MiB/s [2024-12-06T09:58:01.721Z] 8118.21 IOPS, 31.71 MiB/s [2024-12-06T09:58:01.721Z] 8126.25 IOPS, 31.74 MiB/s [2024-12-06T09:58:01.721Z] 8145.84 IOPS, 31.82 MiB/s [2024-12-06T09:58:01.721Z] [2024-12-06 09:57:50.996144] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
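The first reset attempt fails because connect() to the secondary portal 10.0.0.3:4421 returns errno 111 (ECONNREFUSED on Linux): nothing is accepting NVMe/TCP connections there at that moment, so the controller is left in a failed state and bdev_nvme keeps retrying until the reset at 09:57:50 succeeds. A minimal sketch, not used by the test itself, of waiting for that portal to accept connections again with bash's built-in /dev/tcp redirection:
# Poll 10.0.0.3:4421 until a TCP connection is accepted (bash-only /dev/tcp feature).
# Illustrative only; the test relies on bdev_nvme's reconnect-delay/ctrlr-loss logic.
until (exec 3<>/dev/tcp/10.0.0.3/4421) 2>/dev/null; do
    sleep 0.5
done
echo "listener on 10.0.0.3:4421 is reachable again"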
00:20:36.449 8171.30 IOPS, 31.92 MiB/s [2024-12-06T09:58:01.721Z] 8202.21 IOPS, 32.04 MiB/s [2024-12-06T09:58:01.721Z] 8230.83 IOPS, 32.15 MiB/s [2024-12-06T09:58:01.721Z] 8259.43 IOPS, 32.26 MiB/s [2024-12-06T09:58:01.721Z] 8285.28 IOPS, 32.36 MiB/s [2024-12-06T09:58:01.721Z] 8310.59 IOPS, 32.46 MiB/s [2024-12-06T09:58:01.721Z] 8334.92 IOPS, 32.56 MiB/s [2024-12-06T09:58:01.721Z] 8358.04 IOPS, 32.65 MiB/s [2024-12-06T09:58:01.721Z] 8381.04 IOPS, 32.74 MiB/s [2024-12-06T09:58:01.721Z] 8403.93 IOPS, 32.83 MiB/s [2024-12-06T09:58:01.721Z] Received shutdown signal, test time was about 55.678305 seconds 00:20:36.449 00:20:36.449 Latency(us) 00:20:36.449 [2024-12-06T09:58:01.721Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:36.449 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:36.449 Verification LBA range: start 0x0 length 0x4000 00:20:36.449 Nvme0n1 : 55.68 8417.46 32.88 0.00 0.00 15177.23 867.61 7015926.69 00:20:36.449 [2024-12-06T09:58:01.721Z] =================================================================================================================== 00:20:36.449 [2024-12-06T09:58:01.721Z] Total : 8417.46 32.88 0.00 0.00 15177.23 867.61 7015926.69 00:20:36.449 09:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:36.750 09:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:20:36.750 09:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:20:36.750 09:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:20:36.750 09:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:36.750 09:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@121 -- # sync 00:20:36.750 09:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:36.750 09:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@124 -- # set +e 00:20:36.750 09:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:36.750 09:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:36.750 rmmod nvme_tcp 00:20:36.750 rmmod nvme_fabrics 00:20:36.750 rmmod nvme_keyring 00:20:36.750 09:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:36.750 09:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@128 -- # set -e 00:20:36.750 09:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@129 -- # return 0 00:20:36.750 09:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@517 -- # '[' -n 80660 ']' 00:20:36.750 09:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@518 -- # killprocess 80660 00:20:36.750 09:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 80660 ']' 00:20:36.750 09:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 80660 00:20:36.750 09:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname 00:20:36.750 09:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:36.750 09:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80660 00:20:36.750 09:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:36.750 09:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:36.750 killing process with pid 80660 00:20:36.750 09:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80660' 00:20:36.750 09:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 80660 00:20:36.750 09:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 80660 00:20:37.014 09:58:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:37.014 09:58:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:37.014 09:58:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:37.014 09:58:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@297 -- # iptr 00:20:37.014 09:58:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-save 00:20:37.014 09:58:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:37.014 09:58:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:20:37.014 09:58:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:37.014 09:58:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:37.014 09:58:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:37.014 09:58:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:37.014 09:58:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:37.014 09:58:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:37.014 09:58:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:37.014 09:58:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:37.014 09:58:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:37.014 09:58:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:37.014 09:58:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:37.014 09:58:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:37.014 09:58:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:37.014 09:58:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:37.273 09:58:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:37.273 09:58:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:37.273 09:58:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:37.273 09:58:02 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:37.273 09:58:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:37.273 09:58:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@300 -- # return 0 00:20:37.273 00:20:37.273 real 1m1.556s 00:20:37.273 user 2m49.446s 00:20:37.273 sys 0m19.596s 00:20:37.273 09:58:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:37.273 09:58:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:20:37.273 ************************************ 00:20:37.273 END TEST nvmf_host_multipath 00:20:37.273 ************************************ 00:20:37.273 09:58:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@43 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:20:37.273 09:58:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:37.273 09:58:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:37.273 09:58:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:37.273 ************************************ 00:20:37.273 START TEST nvmf_timeout 00:20:37.273 ************************************ 00:20:37.273 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:20:37.273 * Looking for test storage... 00:20:37.273 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:37.273 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:37.273 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1711 -- # lcov --version 00:20:37.273 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:37.534 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:37.534 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:37.534 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:37.534 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:37.534 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:20:37.534 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:20:37.534 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:20:37.534 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:20:37.534 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:20:37.534 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:20:37.534 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:20:37.534 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:37.534 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@344 -- # case "$op" in 00:20:37.534 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@345 -- # : 1 00:20:37.534 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:37.534 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:37.534 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # decimal 1 00:20:37.534 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=1 00:20:37.534 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:37.534 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 1 00:20:37.534 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:20:37.534 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # decimal 2 00:20:37.534 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=2 00:20:37.534 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:37.534 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 2 00:20:37.534 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:20:37.534 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:37.534 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:37.534 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # return 0 00:20:37.534 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:37.534 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:37.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:37.534 --rc genhtml_branch_coverage=1 00:20:37.535 --rc genhtml_function_coverage=1 00:20:37.535 --rc genhtml_legend=1 00:20:37.535 --rc geninfo_all_blocks=1 00:20:37.535 --rc geninfo_unexecuted_blocks=1 00:20:37.535 00:20:37.535 ' 00:20:37.535 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:37.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:37.535 --rc genhtml_branch_coverage=1 00:20:37.535 --rc genhtml_function_coverage=1 00:20:37.535 --rc genhtml_legend=1 00:20:37.535 --rc geninfo_all_blocks=1 00:20:37.535 --rc geninfo_unexecuted_blocks=1 00:20:37.535 00:20:37.535 ' 00:20:37.535 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:37.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:37.535 --rc genhtml_branch_coverage=1 00:20:37.535 --rc genhtml_function_coverage=1 00:20:37.535 --rc genhtml_legend=1 00:20:37.535 --rc geninfo_all_blocks=1 00:20:37.535 --rc geninfo_unexecuted_blocks=1 00:20:37.535 00:20:37.535 ' 00:20:37.535 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:37.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:37.535 --rc genhtml_branch_coverage=1 00:20:37.535 --rc genhtml_function_coverage=1 00:20:37.535 --rc genhtml_legend=1 00:20:37.535 --rc geninfo_all_blocks=1 00:20:37.535 --rc geninfo_unexecuted_blocks=1 00:20:37.535 00:20:37.535 ' 00:20:37.535 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:37.535 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:20:37.535 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:37.535 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:37.535 
09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:37.535 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:37.535 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:37.535 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:37.535 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:37.535 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:37.535 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:37.535 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:37.535 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:20:37.535 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:20:37.535 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:37.535 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:37.535 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:37.535 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:37.535 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:37.535 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:20:37.535 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:37.535 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:37.535 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:37.535 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:37.535 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:37.535 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:37.535 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:20:37.535 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:37.535 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@51 -- # : 0 00:20:37.535 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:37.535 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:37.535 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:37.535 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:37.535 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:37.535 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:37.535 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:37.535 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:37.535 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:37.535 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:37.535 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:37.535 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:37.535 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:37.535 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:20:37.535 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:37.535 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:20:37.535 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:37.535 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:37.535 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:37.535 09:58:02 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:37.535 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:37.535 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:37.535 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:37.535 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:37.535 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:37.535 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:37.535 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:37.535 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:37.535 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:37.535 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:37.535 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:37.535 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:37.535 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:37.535 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:37.535 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:37.535 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:37.535 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:37.535 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:37.535 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:37.535 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:37.535 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:37.535 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:37.535 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:37.535 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:37.535 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:37.535 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:37.535 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:37.535 Cannot find device "nvmf_init_br" 00:20:37.535 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:20:37.535 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:37.535 Cannot find device "nvmf_init_br2" 00:20:37.535 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:20:37.536 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 
-- # ip link set nvmf_tgt_br nomaster 00:20:37.536 Cannot find device "nvmf_tgt_br" 00:20:37.536 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 -- # true 00:20:37.536 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:37.536 Cannot find device "nvmf_tgt_br2" 00:20:37.536 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # true 00:20:37.536 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:37.536 Cannot find device "nvmf_init_br" 00:20:37.536 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # true 00:20:37.536 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:37.536 Cannot find device "nvmf_init_br2" 00:20:37.536 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # true 00:20:37.536 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:37.536 Cannot find device "nvmf_tgt_br" 00:20:37.536 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # true 00:20:37.536 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:37.536 Cannot find device "nvmf_tgt_br2" 00:20:37.536 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # true 00:20:37.536 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:37.536 Cannot find device "nvmf_br" 00:20:37.536 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # true 00:20:37.536 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:37.536 Cannot find device "nvmf_init_if" 00:20:37.536 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # true 00:20:37.536 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:37.536 Cannot find device "nvmf_init_if2" 00:20:37.536 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # true 00:20:37.536 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:37.536 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:37.536 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # true 00:20:37.536 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:37.536 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:37.536 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # true 00:20:37.536 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:37.536 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:37.536 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:37.536 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:37.536 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:37.795 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 
00:20:37.795 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:37.795 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:37.795 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:37.795 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:37.795 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:37.795 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:37.796 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:37.796 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:37.796 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:37.796 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:37.796 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:37.796 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:37.796 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:37.796 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:37.796 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:37.796 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:37.796 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:37.796 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:37.796 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:37.796 09:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:37.796 09:58:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:37.796 09:58:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:37.796 09:58:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:37.796 09:58:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:37.796 09:58:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:37.796 09:58:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 
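Stripped of the xtrace noise, nvmf_veth_init builds a small bridged topology: the initiator-side interfaces nvmf_init_if (10.0.0.1) and nvmf_init_if2 (10.0.0.2) stay in the default namespace, their peers move into nvmf_tgt_ns_spdk as nvmf_tgt_if (10.0.0.3) and nvmf_tgt_if2 (10.0.0.4), everything is joined through the nvmf_br bridge, and iptables rules admit TCP port 4420. A condensed sketch of the same shape for one interface pair (the second pair is set up identically):
# Condensed from the records above; covers nvmf_init_if/nvmf_tgt_if only.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT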
00:20:37.796 09:58:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:37.796 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:37.796 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:20:37.796 00:20:37.796 --- 10.0.0.3 ping statistics --- 00:20:37.796 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:37.796 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:20:37.796 09:58:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:37.796 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:37.796 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.073 ms 00:20:37.796 00:20:37.796 --- 10.0.0.4 ping statistics --- 00:20:37.796 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:37.796 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:20:37.796 09:58:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:37.796 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:37.796 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:20:37.796 00:20:37.796 --- 10.0.0.1 ping statistics --- 00:20:37.796 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:37.796 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:20:37.796 09:58:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:37.796 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:37.796 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:20:37.796 00:20:37.796 --- 10.0.0.2 ping statistics --- 00:20:37.796 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:37.796 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:20:37.796 09:58:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:37.796 09:58:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@461 -- # return 0 00:20:37.796 09:58:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:37.796 09:58:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:37.796 09:58:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:37.796 09:58:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:37.796 09:58:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:37.796 09:58:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:37.796 09:58:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:37.796 09:58:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:20:37.796 09:58:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:37.796 09:58:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:37.796 09:58:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:37.796 09:58:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@509 -- # nvmfpid=81875 00:20:37.796 09:58:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:20:37.796 09:58:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@510 -- # waitforlisten 81875 00:20:37.796 09:58:03 
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 81875 ']' 00:20:37.796 09:58:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:37.796 09:58:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:37.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:37.796 09:58:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:37.796 09:58:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:37.796 09:58:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:38.055 [2024-12-06 09:58:03.121626] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:20:38.055 [2024-12-06 09:58:03.121709] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:38.055 [2024-12-06 09:58:03.268058] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:38.055 [2024-12-06 09:58:03.313210] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:38.055 [2024-12-06 09:58:03.313276] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:38.055 [2024-12-06 09:58:03.313286] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:38.055 [2024-12-06 09:58:03.313294] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:38.055 [2024-12-06 09:58:03.313300] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
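nvmfappstart launches the target inside the namespace with core mask 0x3 (the two reactors on cores 0 and 1 above) and waitforlisten blocks until the RPC socket answers. A simplified sketch, assuming the default RPC socket path /var/tmp/spdk.sock as the "Waiting for process..." message above indicates; the real helper also verifies the process stays alive and issues RPC calls rather than only checking for the socket file:
# Start the target in the test namespace and wait for its RPC socket to show up.
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
nvmfpid=$!
until [ -S /var/tmp/spdk.sock ]; do
    sleep 0.2
done
echo "nvmf_tgt (pid $nvmfpid) is listening on /var/tmp/spdk.sock"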
00:20:38.055 [2024-12-06 09:58:03.314411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:38.055 [2024-12-06 09:58:03.314426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:38.315 [2024-12-06 09:58:03.367421] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:38.315 09:58:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:38.315 09:58:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:20:38.315 09:58:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:38.315 09:58:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:38.315 09:58:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:38.315 09:58:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:38.315 09:58:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:38.315 09:58:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:38.575 [2024-12-06 09:58:03.679728] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:38.575 09:58:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:20:38.834 Malloc0 00:20:38.834 09:58:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:39.093 09:58:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:39.352 09:58:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:39.611 [2024-12-06 09:58:04.703156] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:39.611 09:58:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:20:39.611 09:58:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=81917 00:20:39.611 09:58:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 81917 /var/tmp/bdevperf.sock 00:20:39.611 09:58:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 81917 ']' 00:20:39.611 09:58:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:39.611 09:58:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:39.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:39.611 09:58:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
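host/timeout.sh@25 through @34 above configure the freshly started target and then launch bdevperf as the host-side application. A condensed sketch of that sequence; every argument is copied from the trace, and rpc.py is assumed to talk to /var/tmp/spdk.sock by default, which is where this nvmf_tgt listens:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Target side: TCP transport, a 64 MiB malloc bdev with 512-byte blocks,
# and a subsystem exposing it on 10.0.0.3:4420.
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

# Host side: bdevperf on its own RPC socket; -z defers the run until
# perform_tests is called. Queue depth 128, 4 KiB verify workload, 10 s.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &
bdevperf_pid=$!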
00:20:39.611 09:58:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:39.611 09:58:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:39.611 [2024-12-06 09:58:04.767605] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:20:39.611 [2024-12-06 09:58:04.767712] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81917 ] 00:20:39.870 [2024-12-06 09:58:04.915837] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:39.870 [2024-12-06 09:58:04.970059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:39.870 [2024-12-06 09:58:05.028601] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:39.870 09:58:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:39.870 09:58:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:20:39.870 09:58:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:20:40.129 09:58:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:20:40.387 NVMe0n1 00:20:40.644 09:58:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=81933 00:20:40.644 09:58:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:40.644 09:58:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:20:40.644 Running I/O for 10 seconds... 
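With bdevperf listening on /var/tmp/bdevperf.sock, host/timeout.sh@45 through @53 drive it over that socket: the -r -1 retry setting, an attach whose --ctrlr-loss-timeout-sec 5 / --reconnect-delay-sec 2 pair is what this timeout test exercises, then perform_tests and a one-second pause before the fault is injected. A sketch using only the arguments shown above (reading -r -1 as an unlimited retry count is an assumption):

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
BP_SOCK=/var/tmp/bdevperf.sock

# Retry setting used by the test (-r -1, presumably "retry forever").
$RPC -s $BP_SOCK bdev_nvme_set_options -r -1

# Attach to the target; give up on the controller 5 s after it is lost,
# retrying the connection every 2 s in the meantime. Creates bdev NVMe0n1.
$RPC -s $BP_SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2

# Start the queued 10 s verify run, then wait a second before injecting the fault.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s $BP_SOCK perform_tests &
rpc_pid=$!
sleep 1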
00:20:41.580 09:58:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:41.841 8988.00 IOPS, 35.11 MiB/s [2024-12-06T09:58:07.114Z] [2024-12-06 09:58:06.935648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:87072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:41.842 [2024-12-06 09:58:06.935701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.842 [2024-12-06 09:58:06.935722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:87080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:41.842 [2024-12-06 09:58:06.935731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.842 [2024-12-06 09:58:06.935741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:87088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:41.842 [2024-12-06 09:58:06.935751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.842 [2024-12-06 09:58:06.935760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:87096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:41.842 [2024-12-06 09:58:06.935769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.842 [2024-12-06 09:58:06.935778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:87104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:41.842 [2024-12-06 09:58:06.935787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.842 [2024-12-06 09:58:06.935797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:87112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:41.842 [2024-12-06 09:58:06.935805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.842 [2024-12-06 09:58:06.935815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:87120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:41.842 [2024-12-06 09:58:06.935823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.842 [2024-12-06 09:58:06.935832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:87128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:41.842 [2024-12-06 09:58:06.935840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.842 [2024-12-06 09:58:06.935850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:87136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:41.842 [2024-12-06 09:58:06.935858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.842 [2024-12-06 09:58:06.935867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:87144 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:41.842 [2024-12-06 09:58:06.935875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.842 [2024-12-06 09:58:06.935885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:87152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:41.842 [2024-12-06 09:58:06.935893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.842 [2024-12-06 09:58:06.935902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:87160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:41.842 [2024-12-06 09:58:06.935910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.842 [2024-12-06 09:58:06.935919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:87168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:41.842 [2024-12-06 09:58:06.935927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.842 [2024-12-06 09:58:06.935941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:87176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:41.842 [2024-12-06 09:58:06.935949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.842 [2024-12-06 09:58:06.935958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:87184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:41.842 [2024-12-06 09:58:06.935966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.842 [2024-12-06 09:58:06.935975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:87192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:41.842 [2024-12-06 09:58:06.935983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.842 [2024-12-06 09:58:06.935992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:87200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:41.842 [2024-12-06 09:58:06.936000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.842 [2024-12-06 09:58:06.936011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:87208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:41.842 [2024-12-06 09:58:06.936019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.842 [2024-12-06 09:58:06.936029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:87216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:41.842 [2024-12-06 09:58:06.936037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.842 [2024-12-06 09:58:06.936046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:87224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:20:41.842 [2024-12-06 09:58:06.936054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.842 [2024-12-06 09:58:06.936064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:87232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:41.842 [2024-12-06 09:58:06.936071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.842 [2024-12-06 09:58:06.936081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:87240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:41.842 [2024-12-06 09:58:06.936089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.842 [2024-12-06 09:58:06.936098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:87248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:41.842 [2024-12-06 09:58:06.936106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.842 [2024-12-06 09:58:06.936115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:87256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:41.842 [2024-12-06 09:58:06.936123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.842 [2024-12-06 09:58:06.936132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:86752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.842 [2024-12-06 09:58:06.936140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.842 [2024-12-06 09:58:06.936150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:86760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.842 [2024-12-06 09:58:06.936158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.842 [2024-12-06 09:58:06.936168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:86768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.842 [2024-12-06 09:58:06.936176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.842 [2024-12-06 09:58:06.936186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:86776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.842 [2024-12-06 09:58:06.936193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.842 [2024-12-06 09:58:06.936203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:86784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.842 [2024-12-06 09:58:06.936211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.842 [2024-12-06 09:58:06.936220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:86792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.842 [2024-12-06 09:58:06.936228] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.842 [2024-12-06 09:58:06.936237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:86800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.842 [2024-12-06 09:58:06.936245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.842 [2024-12-06 09:58:06.936254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:86808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.842 [2024-12-06 09:58:06.936262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.842 [2024-12-06 09:58:06.936272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:87264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:41.842 [2024-12-06 09:58:06.936280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.842 [2024-12-06 09:58:06.936289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:87272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:41.842 [2024-12-06 09:58:06.936297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.842 [2024-12-06 09:58:06.936307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:87280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:41.842 [2024-12-06 09:58:06.936314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.842 [2024-12-06 09:58:06.936324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:87288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:41.842 [2024-12-06 09:58:06.936331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.842 [2024-12-06 09:58:06.936341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:87296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:41.842 [2024-12-06 09:58:06.936349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.842 [2024-12-06 09:58:06.936358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:41.842 [2024-12-06 09:58:06.936366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.842 [2024-12-06 09:58:06.936376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:87312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:41.842 [2024-12-06 09:58:06.936384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.842 [2024-12-06 09:58:06.936393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:87320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:41.843 [2024-12-06 09:58:06.936401] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.843 [2024-12-06 09:58:06.936411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:87328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:41.843 [2024-12-06 09:58:06.936418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.843 [2024-12-06 09:58:06.936428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:87336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:41.843 [2024-12-06 09:58:06.936435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.843 [2024-12-06 09:58:06.936445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:87344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:41.843 [2024-12-06 09:58:06.936453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.843 [2024-12-06 09:58:06.936462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:41.843 [2024-12-06 09:58:06.936470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.843 [2024-12-06 09:58:06.936480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:87360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:41.843 [2024-12-06 09:58:06.936487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.843 [2024-12-06 09:58:06.936496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:87368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:41.843 [2024-12-06 09:58:06.936505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.843 [2024-12-06 09:58:06.936514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:87376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:41.843 [2024-12-06 09:58:06.936522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.843 [2024-12-06 09:58:06.936531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:87384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:41.843 [2024-12-06 09:58:06.936539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.843 [2024-12-06 09:58:06.936548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:86816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.843 [2024-12-06 09:58:06.936556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.843 [2024-12-06 09:58:06.936566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:86824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.843 [2024-12-06 09:58:06.936575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.843 [2024-12-06 09:58:06.936593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:86832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.843 [2024-12-06 09:58:06.936603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.843 [2024-12-06 09:58:06.936629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:86840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.843 [2024-12-06 09:58:06.936637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.843 [2024-12-06 09:58:06.936647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:86848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.843 [2024-12-06 09:58:06.936655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.843 [2024-12-06 09:58:06.936665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:86856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.843 [2024-12-06 09:58:06.936673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.843 [2024-12-06 09:58:06.936684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:86864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.843 [2024-12-06 09:58:06.936692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.843 [2024-12-06 09:58:06.936701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:86872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.843 [2024-12-06 09:58:06.936709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.843 [2024-12-06 09:58:06.936719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:87392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:41.843 [2024-12-06 09:58:06.936727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.843 [2024-12-06 09:58:06.936736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:87400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:41.843 [2024-12-06 09:58:06.936744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.843 [2024-12-06 09:58:06.936754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:87408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:41.843 [2024-12-06 09:58:06.936762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.843 [2024-12-06 09:58:06.936771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:87416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:41.843 [2024-12-06 09:58:06.936779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.843 
[2024-12-06 09:58:06.936789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:87424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:41.843 [2024-12-06 09:58:06.936797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.843 [2024-12-06 09:58:06.936806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:41.843 [2024-12-06 09:58:06.936814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.843 [2024-12-06 09:58:06.936823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:87440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:41.843 [2024-12-06 09:58:06.936831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.843 [2024-12-06 09:58:06.936842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:87448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:41.843 [2024-12-06 09:58:06.936850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.843 [2024-12-06 09:58:06.936860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:87456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:41.843 [2024-12-06 09:58:06.936868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.843 [2024-12-06 09:58:06.936878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:87464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:41.843 [2024-12-06 09:58:06.936887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.843 [2024-12-06 09:58:06.936896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:87472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:41.843 [2024-12-06 09:58:06.936905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.843 [2024-12-06 09:58:06.936914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:87480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:41.843 [2024-12-06 09:58:06.936923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.843 [2024-12-06 09:58:06.936932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:87488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:41.843 [2024-12-06 09:58:06.936940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.843 [2024-12-06 09:58:06.936950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:87496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:41.843 [2024-12-06 09:58:06.936958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.843 [2024-12-06 09:58:06.936983] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:87504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:41.843 [2024-12-06 09:58:06.936992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.843 [2024-12-06 09:58:06.937001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:87512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:41.843 [2024-12-06 09:58:06.937009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.843 [2024-12-06 09:58:06.937018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:87520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:41.843 [2024-12-06 09:58:06.937026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.844 [2024-12-06 09:58:06.937035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:87528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:41.844 [2024-12-06 09:58:06.937043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.844 [2024-12-06 09:58:06.937053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:87536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:41.844 [2024-12-06 09:58:06.937061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.844 [2024-12-06 09:58:06.937070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:41.844 [2024-12-06 09:58:06.937078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.844 [2024-12-06 09:58:06.937088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:86880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.844 [2024-12-06 09:58:06.937096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.844 [2024-12-06 09:58:06.937105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:86888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.844 [2024-12-06 09:58:06.937113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.844 [2024-12-06 09:58:06.937122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:86896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.844 [2024-12-06 09:58:06.937130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.844 [2024-12-06 09:58:06.937140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:86904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.844 [2024-12-06 09:58:06.937148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.844 [2024-12-06 09:58:06.937157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:120 nsid:1 lba:86912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.844 [2024-12-06 09:58:06.937165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.844 [2024-12-06 09:58:06.937175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:86920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.844 [2024-12-06 09:58:06.937183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.844 [2024-12-06 09:58:06.937192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:86928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.844 [2024-12-06 09:58:06.937200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.844 [2024-12-06 09:58:06.937209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:86936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.844 [2024-12-06 09:58:06.937217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.844 [2024-12-06 09:58:06.937227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:86944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.844 [2024-12-06 09:58:06.937235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.844 [2024-12-06 09:58:06.937244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:86952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.844 [2024-12-06 09:58:06.937252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.844 [2024-12-06 09:58:06.937269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:86960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.844 [2024-12-06 09:58:06.937279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.844 [2024-12-06 09:58:06.937288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:86968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.844 [2024-12-06 09:58:06.937296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.844 [2024-12-06 09:58:06.937307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:86976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.844 [2024-12-06 09:58:06.937315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.844 [2024-12-06 09:58:06.937324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:86984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.844 [2024-12-06 09:58:06.937333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.844 [2024-12-06 09:58:06.937342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:86992 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.844 [2024-12-06 09:58:06.937350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.844 [2024-12-06 09:58:06.937359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:87000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.844 [2024-12-06 09:58:06.937368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.844 [2024-12-06 09:58:06.937377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:87552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:41.844 [2024-12-06 09:58:06.937385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.844 [2024-12-06 09:58:06.937395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:87560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:41.844 [2024-12-06 09:58:06.937403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.844 [2024-12-06 09:58:06.937412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:87568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:41.844 [2024-12-06 09:58:06.937420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.844 [2024-12-06 09:58:06.937429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:87576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:41.844 [2024-12-06 09:58:06.937437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.844 [2024-12-06 09:58:06.937447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:87584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:41.844 [2024-12-06 09:58:06.937462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.844 [2024-12-06 09:58:06.937472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:87592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:41.844 [2024-12-06 09:58:06.937480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.844 [2024-12-06 09:58:06.937489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:87600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:41.844 [2024-12-06 09:58:06.937497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.844 [2024-12-06 09:58:06.937507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:87608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:41.844 [2024-12-06 09:58:06.937515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.844 [2024-12-06 09:58:06.937525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:87616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:41.844 
[2024-12-06 09:58:06.937534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.844 [2024-12-06 09:58:06.937544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:87624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:41.844 [2024-12-06 09:58:06.937552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.844 [2024-12-06 09:58:06.937561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:87632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:41.844 [2024-12-06 09:58:06.937580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.844 [2024-12-06 09:58:06.937590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:87640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:41.844 [2024-12-06 09:58:06.937616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.844 [2024-12-06 09:58:06.937626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:87648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:41.844 [2024-12-06 09:58:06.937634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.844 [2024-12-06 09:58:06.937644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:87656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:41.844 [2024-12-06 09:58:06.937658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.844 [2024-12-06 09:58:06.937668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:87008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.844 [2024-12-06 09:58:06.937676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.844 [2024-12-06 09:58:06.937686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:87016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.844 [2024-12-06 09:58:06.937694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.844 [2024-12-06 09:58:06.937703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:87024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.844 [2024-12-06 09:58:06.937711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.844 [2024-12-06 09:58:06.937721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:87032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.845 [2024-12-06 09:58:06.937729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.845 [2024-12-06 09:58:06.937738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:87040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.845 [2024-12-06 09:58:06.937746] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.845 [2024-12-06 09:58:06.937756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:87048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.845 [2024-12-06 09:58:06.937764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.845 [2024-12-06 09:58:06.937774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:87056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.845 [2024-12-06 09:58:06.937786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.845 [2024-12-06 09:58:06.937796] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f9690 is same with the state(6) to be set 00:20:41.845 [2024-12-06 09:58:06.937806] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:41.845 [2024-12-06 09:58:06.937813] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:41.845 [2024-12-06 09:58:06.937820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:87064 len:8 PRP1 0x0 PRP2 0x0 00:20:41.845 [2024-12-06 09:58:06.937828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.845 [2024-12-06 09:58:06.937836] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:41.845 [2024-12-06 09:58:06.937842] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:41.845 [2024-12-06 09:58:06.937855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87664 len:8 PRP1 0x0 PRP2 0x0 00:20:41.845 [2024-12-06 09:58:06.937863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.845 [2024-12-06 09:58:06.937871] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:41.845 [2024-12-06 09:58:06.937878] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:41.845 [2024-12-06 09:58:06.937884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87672 len:8 PRP1 0x0 PRP2 0x0 00:20:41.845 [2024-12-06 09:58:06.937908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.845 [2024-12-06 09:58:06.937932] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:41.845 [2024-12-06 09:58:06.937938] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:41.845 [2024-12-06 09:58:06.937945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87680 len:8 PRP1 0x0 PRP2 0x0 00:20:41.845 [2024-12-06 09:58:06.937953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.845 [2024-12-06 09:58:06.937961] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:41.845 [2024-12-06 09:58:06.937967] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command 
completed manually: 00:20:41.845 [2024-12-06 09:58:06.937974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87688 len:8 PRP1 0x0 PRP2 0x0 00:20:41.845 [2024-12-06 09:58:06.937982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.845 [2024-12-06 09:58:06.937991] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:41.845 [2024-12-06 09:58:06.937997] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:41.845 [2024-12-06 09:58:06.938019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87696 len:8 PRP1 0x0 PRP2 0x0 00:20:41.845 [2024-12-06 09:58:06.938027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.845 [2024-12-06 09:58:06.938036] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:41.845 [2024-12-06 09:58:06.938042] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:41.845 [2024-12-06 09:58:06.938049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87704 len:8 PRP1 0x0 PRP2 0x0 00:20:41.845 [2024-12-06 09:58:06.938057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.845 [2024-12-06 09:58:06.938066] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:41.845 [2024-12-06 09:58:06.938073] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:41.845 [2024-12-06 09:58:06.938084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87712 len:8 PRP1 0x0 PRP2 0x0 00:20:41.845 [2024-12-06 09:58:06.938092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.845 [2024-12-06 09:58:06.938101] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:41.845 [2024-12-06 09:58:06.938108] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:41.845 [2024-12-06 09:58:06.938115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87720 len:8 PRP1 0x0 PRP2 0x0 00:20:41.845 [2024-12-06 09:58:06.938123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.845 [2024-12-06 09:58:06.938132] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:41.845 [2024-12-06 09:58:06.938138] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:41.845 [2024-12-06 09:58:06.938150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87728 len:8 PRP1 0x0 PRP2 0x0 00:20:41.845 [2024-12-06 09:58:06.938158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.845 [2024-12-06 09:58:06.938167] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:41.845 [2024-12-06 09:58:06.938173] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:41.845 [2024-12-06 
09:58:06.938180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87736 len:8 PRP1 0x0 PRP2 0x0 00:20:41.845 [2024-12-06 09:58:06.938188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.845 [2024-12-06 09:58:06.938197] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:41.845 [2024-12-06 09:58:06.938203] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:41.845 [2024-12-06 09:58:06.938211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87744 len:8 PRP1 0x0 PRP2 0x0 00:20:41.845 [2024-12-06 09:58:06.938219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.845 [2024-12-06 09:58:06.938228] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:41.845 [2024-12-06 09:58:06.938234] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:41.845 [2024-12-06 09:58:06.938241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87752 len:8 PRP1 0x0 PRP2 0x0 00:20:41.845 [2024-12-06 09:58:06.938249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.845 [2024-12-06 09:58:06.938257] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:41.845 [2024-12-06 09:58:06.938264] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:41.845 [2024-12-06 09:58:06.938270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87760 len:8 PRP1 0x0 PRP2 0x0 00:20:41.845 [2024-12-06 09:58:06.938279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.845 [2024-12-06 09:58:06.938287] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:41.845 [2024-12-06 09:58:06.938294] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:41.845 [2024-12-06 09:58:06.938301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87768 len:8 PRP1 0x0 PRP2 0x0 00:20:41.845 [2024-12-06 09:58:06.938311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.845 [2024-12-06 09:58:06.938437] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:41.845 [2024-12-06 09:58:06.938454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.845 [2024-12-06 09:58:06.938470] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:41.845 [2024-12-06 09:58:06.938478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.845 [2024-12-06 09:58:06.938488] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:41.845 [2024-12-06 09:58:06.938496] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.845 [2024-12-06 09:58:06.938505] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:41.845 [2024-12-06 09:58:06.938513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.845 09:58:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:20:41.845 [2024-12-06 09:58:06.953343] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1799e50 is same with the state(6) to be set 00:20:41.845 [2024-12-06 09:58:06.953616] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:20:41.845 [2024-12-06 09:58:06.953645] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1799e50 (9): Bad file descriptor 00:20:41.845 [2024-12-06 09:58:06.953747] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.845 [2024-12-06 09:58:06.953769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1799e50 with addr=10.0.0.3, port=4420 00:20:41.845 [2024-12-06 09:58:06.953779] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1799e50 is same with the state(6) to be set 00:20:41.845 [2024-12-06 09:58:06.953796] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1799e50 (9): Bad file descriptor 00:20:41.845 [2024-12-06 09:58:06.953811] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:20:41.845 [2024-12-06 09:58:06.953821] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:20:41.845 [2024-12-06 09:58:06.953831] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:20:41.846 [2024-12-06 09:58:06.953840] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
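The flood of ABORTED - SQ DELETION completions above is the direct effect of host/timeout.sh@55 removing the 10.0.0.3:4420 listener one second into the run: every command queued on qpair 1 is aborted, bdev_nvme disconnects the controller, and each reconnect attempt fails with connect() errno 111 (connection refused) because nothing is listening any more, so the host keeps retrying every --reconnect-delay-sec 2 seconds within the 5 s controller-loss window. The next steps (@57/@58) assert that the controller and its bdev are still present while those retries are in flight; a condensed sketch of that check, mirroring the RPCs visible just below:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
BP_SOCK=/var/tmp/bdevperf.sock

# Inside the reconnect window the controller must still be registered ...
[[ "$($RPC -s $BP_SOCK bdev_nvme_get_controllers | jq -r '.[].name')" == "NVMe0" ]]
# ... and its namespace bdev must still exist.
[[ "$($RPC -s $BP_SOCK bdev_get_bdevs | jq -r '.[].name')" == "NVMe0n1" ]]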
00:20:41.846 [2024-12-06 09:58:06.953850] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:20:43.739 5422.00 IOPS, 21.18 MiB/s [2024-12-06T09:58:09.011Z] 3614.67 IOPS, 14.12 MiB/s [2024-12-06T09:58:09.011Z] [2024-12-06 09:58:08.954089] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.739 [2024-12-06 09:58:08.954163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1799e50 with addr=10.0.0.3, port=4420 00:20:43.739 [2024-12-06 09:58:08.954177] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1799e50 is same with the state(6) to be set 00:20:43.739 [2024-12-06 09:58:08.954201] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1799e50 (9): Bad file descriptor 00:20:43.739 [2024-12-06 09:58:08.954219] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:20:43.739 [2024-12-06 09:58:08.954228] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:20:43.739 [2024-12-06 09:58:08.954238] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:20:43.739 [2024-12-06 09:58:08.954248] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:20:43.739 [2024-12-06 09:58:08.954258] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:20:43.739 09:58:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:20:43.739 09:58:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:20:43.739 09:58:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:43.998 09:58:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:20:43.998 09:58:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:20:43.998 09:58:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:20:43.998 09:58:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:20:44.257 09:58:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:20:44.257 09:58:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:20:45.895 2711.00 IOPS, 10.59 MiB/s [2024-12-06T09:58:11.167Z] 2168.80 IOPS, 8.47 MiB/s [2024-12-06T09:58:11.167Z] [2024-12-06 09:58:10.954503] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:45.895 [2024-12-06 09:58:10.954592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1799e50 with addr=10.0.0.3, port=4420 00:20:45.895 [2024-12-06 09:58:10.954611] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1799e50 is same with the state(6) to be set 00:20:45.895 [2024-12-06 09:58:10.954641] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1799e50 (9): Bad file descriptor 00:20:45.895 [2024-12-06 09:58:10.954674] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:20:45.895 [2024-12-06 
09:58:10.954685] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:20:45.895 [2024-12-06 09:58:10.954698] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:20:45.895 [2024-12-06 09:58:10.954710] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:20:45.895 [2024-12-06 09:58:10.954723] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:20:47.770 1807.33 IOPS, 7.06 MiB/s [2024-12-06T09:58:13.042Z] 1549.14 IOPS, 6.05 MiB/s [2024-12-06T09:58:13.042Z] [2024-12-06 09:58:12.954762] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:20:47.770 [2024-12-06 09:58:12.954804] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:20:47.770 [2024-12-06 09:58:12.954815] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:20:47.770 [2024-12-06 09:58:12.954825] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] already in failed state 00:20:47.770 [2024-12-06 09:58:12.954836] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:20:48.702 1355.50 IOPS, 5.29 MiB/s 00:20:48.702 Latency(us) 00:20:48.702 [2024-12-06T09:58:13.974Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:48.702 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:48.702 Verification LBA range: start 0x0 length 0x4000 00:20:48.702 NVMe0n1 : 8.20 1322.97 5.17 15.62 0.00 95684.20 2844.86 7046430.72 00:20:48.702 [2024-12-06T09:58:13.974Z] =================================================================================================================== 00:20:48.702 [2024-12-06T09:58:13.974Z] Total : 1322.97 5.17 15.62 0.00 95684.20 2844.86 7046430.72 00:20:48.702 { 00:20:48.702 "results": [ 00:20:48.702 { 00:20:48.702 "job": "NVMe0n1", 00:20:48.702 "core_mask": "0x4", 00:20:48.702 "workload": "verify", 00:20:48.702 "status": "finished", 00:20:48.702 "verify_range": { 00:20:48.702 "start": 0, 00:20:48.702 "length": 16384 00:20:48.702 }, 00:20:48.702 "queue_depth": 128, 00:20:48.702 "io_size": 4096, 00:20:48.702 "runtime": 8.196704, 00:20:48.702 "iops": 1322.9707941143172, 00:20:48.702 "mibps": 5.1678546645090515, 00:20:48.702 "io_failed": 128, 00:20:48.702 "io_timeout": 0, 00:20:48.702 "avg_latency_us": 95684.200544858, 00:20:48.702 "min_latency_us": 2844.858181818182, 00:20:48.702 "max_latency_us": 7046430.72 00:20:48.702 } 00:20:48.702 ], 00:20:48.702 "core_count": 1 00:20:48.702 } 00:20:49.267 09:58:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:20:49.267 09:58:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:20:49.267 09:58:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:49.525 09:58:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:20:49.525 09:58:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:20:49.525 09:58:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:20:49.525 09:58:14 
nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:20:49.784 09:58:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:20:49.784 09:58:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 81933 00:20:49.784 09:58:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 81917 00:20:49.784 09:58:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 81917 ']' 00:20:49.784 09:58:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 81917 00:20:49.784 09:58:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:20:50.042 09:58:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:50.042 09:58:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81917 00:20:50.042 killing process with pid 81917 00:20:50.042 Received shutdown signal, test time was about 9.322134 seconds 00:20:50.042 00:20:50.042 Latency(us) 00:20:50.042 [2024-12-06T09:58:15.314Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:50.042 [2024-12-06T09:58:15.314Z] =================================================================================================================== 00:20:50.042 [2024-12-06T09:58:15.314Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:50.042 09:58:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:50.042 09:58:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:50.042 09:58:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81917' 00:20:50.042 09:58:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 81917 00:20:50.042 09:58:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 81917 00:20:50.299 09:58:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:50.557 [2024-12-06 09:58:15.575046] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:50.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:50.557 09:58:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=82051 00:20:50.557 09:58:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:20:50.557 09:58:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 82051 /var/tmp/bdevperf.sock 00:20:50.557 09:58:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 82051 ']' 00:20:50.557 09:58:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:50.557 09:58:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:50.557 09:58:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:20:50.557 09:58:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:50.557 09:58:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:50.557 [2024-12-06 09:58:15.637687] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:20:50.557 [2024-12-06 09:58:15.637968] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82051 ] 00:20:50.557 [2024-12-06 09:58:15.785622] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:50.815 [2024-12-06 09:58:15.839393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:50.815 [2024-12-06 09:58:15.909826] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:51.381 09:58:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:51.381 09:58:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:20:51.381 09:58:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:20:51.640 09:58:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:20:51.899 NVMe0n1 00:20:51.899 09:58:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=82079 00:20:51.899 09:58:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:20:51.899 09:58:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:51.899 Running I/O for 10 seconds... 
00:20:52.839 09:58:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:53.100 8255.00 IOPS, 32.25 MiB/s [2024-12-06T09:58:18.372Z] [2024-12-06 09:58:18.328506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:80400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.100 [2024-12-06 09:58:18.328772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.100 [2024-12-06 09:58:18.329006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:80408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.100 [2024-12-06 09:58:18.329211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.100 [2024-12-06 09:58:18.329339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:80416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.100 [2024-12-06 09:58:18.329459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.100 [2024-12-06 09:58:18.329521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:80424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.100 [2024-12-06 09:58:18.329655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.100 [2024-12-06 09:58:18.329714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:80432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.100 [2024-12-06 09:58:18.329833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.100 [2024-12-06 09:58:18.329889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:80440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.100 [2024-12-06 09:58:18.330038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.101 [2024-12-06 09:58:18.330101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:80448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.101 [2024-12-06 09:58:18.330221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.101 [2024-12-06 09:58:18.330276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:80456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.101 [2024-12-06 09:58:18.330391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.101 [2024-12-06 09:58:18.330446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:80464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.101 [2024-12-06 09:58:18.330565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.101 [2024-12-06 09:58:18.330637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:80472 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.101 [2024-12-06 09:58:18.330751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.101 [2024-12-06 09:58:18.330813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:80480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.101 [2024-12-06 09:58:18.330927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.101 [2024-12-06 09:58:18.330983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:80488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.101 [2024-12-06 09:58:18.331087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.101 [2024-12-06 09:58:18.331141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:79824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.101 [2024-12-06 09:58:18.331283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.101 [2024-12-06 09:58:18.331340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:79832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.101 [2024-12-06 09:58:18.331454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.101 [2024-12-06 09:58:18.331509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:79840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.101 [2024-12-06 09:58:18.331645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.101 [2024-12-06 09:58:18.331700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:79848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.101 [2024-12-06 09:58:18.331826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.101 [2024-12-06 09:58:18.331881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:79856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.101 [2024-12-06 09:58:18.331983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.101 [2024-12-06 09:58:18.332044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:79864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.101 [2024-12-06 09:58:18.332093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.101 [2024-12-06 09:58:18.332230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:79872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.101 [2024-12-06 09:58:18.332247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.101 [2024-12-06 09:58:18.332258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:79880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:53.101 [2024-12-06 09:58:18.332267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.101 [2024-12-06 09:58:18.332278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:79888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.101 [2024-12-06 09:58:18.332287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.101 [2024-12-06 09:58:18.332305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:79896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.101 [2024-12-06 09:58:18.332314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.101 [2024-12-06 09:58:18.332324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:79904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.101 [2024-12-06 09:58:18.332333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.101 [2024-12-06 09:58:18.332344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:79912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.101 [2024-12-06 09:58:18.332353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.101 [2024-12-06 09:58:18.332363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:79920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.101 [2024-12-06 09:58:18.332373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.101 [2024-12-06 09:58:18.332383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:79928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.101 [2024-12-06 09:58:18.332392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.101 [2024-12-06 09:58:18.332411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:79936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.101 [2024-12-06 09:58:18.332419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.101 [2024-12-06 09:58:18.332429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:79944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.101 [2024-12-06 09:58:18.332438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.101 [2024-12-06 09:58:18.332449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:80496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.101 [2024-12-06 09:58:18.332458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.101 [2024-12-06 09:58:18.332469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:80504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.101 [2024-12-06 09:58:18.332489] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.101 [2024-12-06 09:58:18.332500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:80512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.101 [2024-12-06 09:58:18.332508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.101 [2024-12-06 09:58:18.332520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:80520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.101 [2024-12-06 09:58:18.332530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.101 [2024-12-06 09:58:18.332540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:80528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.101 [2024-12-06 09:58:18.332549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.101 [2024-12-06 09:58:18.332559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:80536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.101 [2024-12-06 09:58:18.332567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.101 [2024-12-06 09:58:18.332752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:80544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.101 [2024-12-06 09:58:18.332924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.101 [2024-12-06 09:58:18.332986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:80552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.101 [2024-12-06 09:58:18.333072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.101 [2024-12-06 09:58:18.333089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:80560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.101 [2024-12-06 09:58:18.333099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.101 [2024-12-06 09:58:18.333109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:80568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.101 [2024-12-06 09:58:18.333118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.101 [2024-12-06 09:58:18.333131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:80576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.101 [2024-12-06 09:58:18.333139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.101 [2024-12-06 09:58:18.333150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:80584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.101 [2024-12-06 09:58:18.333159] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.101 [2024-12-06 09:58:18.333169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:80592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.101 [2024-12-06 09:58:18.333187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.101 [2024-12-06 09:58:18.333198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:80600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.101 [2024-12-06 09:58:18.333206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.101 [2024-12-06 09:58:18.333222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:79952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.101 [2024-12-06 09:58:18.333230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.101 [2024-12-06 09:58:18.333240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:79960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.101 [2024-12-06 09:58:18.333249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.101 [2024-12-06 09:58:18.333259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:79968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.101 [2024-12-06 09:58:18.333267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.101 [2024-12-06 09:58:18.333277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:79976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.102 [2024-12-06 09:58:18.333285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.102 [2024-12-06 09:58:18.333296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:79984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.102 [2024-12-06 09:58:18.333304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.102 [2024-12-06 09:58:18.333314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:79992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.102 [2024-12-06 09:58:18.333323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.102 [2024-12-06 09:58:18.333334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:80000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.102 [2024-12-06 09:58:18.333344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.102 [2024-12-06 09:58:18.333355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:80008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.102 [2024-12-06 09:58:18.333379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.102 [2024-12-06 09:58:18.333389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:80608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.102 [2024-12-06 09:58:18.333397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.102 [2024-12-06 09:58:18.333407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:80616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.102 [2024-12-06 09:58:18.333416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.102 [2024-12-06 09:58:18.333426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:80624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.102 [2024-12-06 09:58:18.333441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.102 [2024-12-06 09:58:18.333451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:80632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.102 [2024-12-06 09:58:18.333459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.102 [2024-12-06 09:58:18.333469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:80640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.102 [2024-12-06 09:58:18.333478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.102 [2024-12-06 09:58:18.333489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:80648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.102 [2024-12-06 09:58:18.333497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.102 [2024-12-06 09:58:18.333507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:80656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.102 [2024-12-06 09:58:18.333516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.102 [2024-12-06 09:58:18.333526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:80664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.102 [2024-12-06 09:58:18.333535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.102 [2024-12-06 09:58:18.333544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:80672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.102 [2024-12-06 09:58:18.333553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.102 [2024-12-06 09:58:18.333577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:80680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.102 [2024-12-06 09:58:18.333585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.102 
[2024-12-06 09:58:18.333606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:80688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.102 [2024-12-06 09:58:18.333625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.102 [2024-12-06 09:58:18.333636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:80696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.102 [2024-12-06 09:58:18.333644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.102 [2024-12-06 09:58:18.333653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:80704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.102 [2024-12-06 09:58:18.333662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.102 [2024-12-06 09:58:18.333671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:80712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.102 [2024-12-06 09:58:18.333679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.102 [2024-12-06 09:58:18.333690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:80016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.102 [2024-12-06 09:58:18.333699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.102 [2024-12-06 09:58:18.333709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:80024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.102 [2024-12-06 09:58:18.333717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.102 [2024-12-06 09:58:18.333727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:80032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.102 [2024-12-06 09:58:18.333735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.102 [2024-12-06 09:58:18.333745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:80040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.102 [2024-12-06 09:58:18.333755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.102 [2024-12-06 09:58:18.333765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:80048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.102 [2024-12-06 09:58:18.333773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.102 [2024-12-06 09:58:18.333782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:80056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.102 [2024-12-06 09:58:18.333791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.102 [2024-12-06 09:58:18.333811] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:80064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.102 [2024-12-06 09:58:18.333819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.102 [2024-12-06 09:58:18.333828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:80072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.102 [2024-12-06 09:58:18.333836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.102 [2024-12-06 09:58:18.333847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:80080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.102 [2024-12-06 09:58:18.333856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.102 [2024-12-06 09:58:18.333866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.102 [2024-12-06 09:58:18.333875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.102 [2024-12-06 09:58:18.333885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:80096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.102 [2024-12-06 09:58:18.333893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.102 [2024-12-06 09:58:18.333903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:80104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.102 [2024-12-06 09:58:18.333911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.102 [2024-12-06 09:58:18.333920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:80112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.102 [2024-12-06 09:58:18.333928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.102 [2024-12-06 09:58:18.333938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:80120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.102 [2024-12-06 09:58:18.333946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.102 [2024-12-06 09:58:18.333957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:80128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.102 [2024-12-06 09:58:18.333966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.102 [2024-12-06 09:58:18.333981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:80136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.102 [2024-12-06 09:58:18.333989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.102 [2024-12-06 09:58:18.333998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:64 nsid:1 lba:80144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.102 [2024-12-06 09:58:18.334006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.102 [2024-12-06 09:58:18.334016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:80152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.102 [2024-12-06 09:58:18.334024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.102 [2024-12-06 09:58:18.334033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:80160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.102 [2024-12-06 09:58:18.334042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.102 [2024-12-06 09:58:18.334052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:80168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.102 [2024-12-06 09:58:18.334075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.102 [2024-12-06 09:58:18.334085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:80176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.102 [2024-12-06 09:58:18.334093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.103 [2024-12-06 09:58:18.334104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:80184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.103 [2024-12-06 09:58:18.334112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.103 [2024-12-06 09:58:18.334123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:80192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.103 [2024-12-06 09:58:18.334143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.103 [2024-12-06 09:58:18.334153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:80200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.103 [2024-12-06 09:58:18.334162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.103 [2024-12-06 09:58:18.334172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:80720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.103 [2024-12-06 09:58:18.334180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.103 [2024-12-06 09:58:18.334191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:80728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.103 [2024-12-06 09:58:18.334199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.103 [2024-12-06 09:58:18.334209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:80736 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.103 [2024-12-06 09:58:18.334218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.103 [2024-12-06 09:58:18.334228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:80744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.103 [2024-12-06 09:58:18.334236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.103 [2024-12-06 09:58:18.334246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:80752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.103 [2024-12-06 09:58:18.334254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.103 [2024-12-06 09:58:18.334264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:80760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.103 [2024-12-06 09:58:18.334273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.103 [2024-12-06 09:58:18.334283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:80768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.103 [2024-12-06 09:58:18.334291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.103 [2024-12-06 09:58:18.334301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:80776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.103 [2024-12-06 09:58:18.334309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.103 [2024-12-06 09:58:18.334319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:80208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.103 [2024-12-06 09:58:18.334328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.103 [2024-12-06 09:58:18.334338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:80216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.103 [2024-12-06 09:58:18.334346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.103 [2024-12-06 09:58:18.334356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:80224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.103 [2024-12-06 09:58:18.334364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.103 [2024-12-06 09:58:18.334374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:80232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.103 [2024-12-06 09:58:18.334382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.103 [2024-12-06 09:58:18.334392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:80240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.103 
[2024-12-06 09:58:18.334400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.103 [2024-12-06 09:58:18.334410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:80248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.103 [2024-12-06 09:58:18.334419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.103 [2024-12-06 09:58:18.334430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:80256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.103 [2024-12-06 09:58:18.334444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.103 [2024-12-06 09:58:18.334454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:80264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.103 [2024-12-06 09:58:18.334462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.103 [2024-12-06 09:58:18.334472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:80272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.103 [2024-12-06 09:58:18.334481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.103 [2024-12-06 09:58:18.334492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:80280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.103 [2024-12-06 09:58:18.334501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.103 [2024-12-06 09:58:18.334511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:80288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.103 [2024-12-06 09:58:18.334519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.103 [2024-12-06 09:58:18.334530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:80296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.103 [2024-12-06 09:58:18.334538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.103 [2024-12-06 09:58:18.334548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:80304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.103 [2024-12-06 09:58:18.334556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.103 [2024-12-06 09:58:18.334566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:80312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.103 [2024-12-06 09:58:18.334574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.103 [2024-12-06 09:58:18.334597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:80320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.103 [2024-12-06 09:58:18.334606] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.103 [2024-12-06 09:58:18.334616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:80328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.103 [2024-12-06 09:58:18.334625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.103 [2024-12-06 09:58:18.334635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:80784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.103 [2024-12-06 09:58:18.334643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.103 [2024-12-06 09:58:18.334653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:80792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.103 [2024-12-06 09:58:18.334662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.103 [2024-12-06 09:58:18.334671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:80800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.103 [2024-12-06 09:58:18.334680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.103 [2024-12-06 09:58:18.334692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:80808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.103 [2024-12-06 09:58:18.334700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.103 [2024-12-06 09:58:18.334710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:80816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.103 [2024-12-06 09:58:18.334718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.103 [2024-12-06 09:58:18.334729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:80824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.103 [2024-12-06 09:58:18.334745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.103 [2024-12-06 09:58:18.334755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:80832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.103 [2024-12-06 09:58:18.334770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.103 [2024-12-06 09:58:18.334780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:80840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:53.103 [2024-12-06 09:58:18.334788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.103 [2024-12-06 09:58:18.334798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:80336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.103 [2024-12-06 09:58:18.334806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.103 [2024-12-06 09:58:18.334816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:80344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.103 [2024-12-06 09:58:18.334825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.103 [2024-12-06 09:58:18.334835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:80352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.103 [2024-12-06 09:58:18.334843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.103 [2024-12-06 09:58:18.334853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:80360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.103 [2024-12-06 09:58:18.334861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.103 [2024-12-06 09:58:18.334871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:80368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.103 [2024-12-06 09:58:18.334879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.104 [2024-12-06 09:58:18.334889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:80376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.104 [2024-12-06 09:58:18.334897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.104 [2024-12-06 09:58:18.334907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:80384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.104 [2024-12-06 09:58:18.334917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.104 [2024-12-06 09:58:18.334927] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1485690 is same with the state(6) to be set 00:20:53.104 [2024-12-06 09:58:18.334938] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:53.104 [2024-12-06 09:58:18.334946] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:53.104 [2024-12-06 09:58:18.334953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80392 len:8 PRP1 0x0 PRP2 0x0 00:20:53.104 [2024-12-06 09:58:18.334962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.104 [2024-12-06 09:58:18.335260] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:20:53.104 [2024-12-06 09:58:18.335357] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1425e50 (9): Bad file descriptor 00:20:53.104 [2024-12-06 09:58:18.335455] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:53.104 [2024-12-06 09:58:18.335474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1425e50 with addr=10.0.0.3, port=4420 00:20:53.104 [2024-12-06 09:58:18.335485] 
nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1425e50 is same with the state(6) to be set 00:20:53.104 [2024-12-06 09:58:18.335502] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1425e50 (9): Bad file descriptor 00:20:53.104 [2024-12-06 09:58:18.335527] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:20:53.104 [2024-12-06 09:58:18.335536] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:20:53.104 [2024-12-06 09:58:18.335548] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:20:53.104 [2024-12-06 09:58:18.335580] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:20:53.104 [2024-12-06 09:58:18.335604] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:20:53.104 09:58:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1 00:20:54.301 4989.00 IOPS, 19.49 MiB/s [2024-12-06T09:58:19.573Z] [2024-12-06 09:58:19.335686] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:54.301 [2024-12-06 09:58:19.335864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1425e50 with addr=10.0.0.3, port=4420 00:20:54.301 [2024-12-06 09:58:19.335996] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1425e50 is same with the state(6) to be set 00:20:54.301 [2024-12-06 09:58:19.336129] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1425e50 (9): Bad file descriptor 00:20:54.301 [2024-12-06 09:58:19.336196] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:20:54.301 [2024-12-06 09:58:19.336365] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:20:54.301 [2024-12-06 09:58:19.336418] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:20:54.301 [2024-12-06 09:58:19.336565] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:20:54.301 [2024-12-06 09:58:19.336709] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:20:54.301 09:58:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:54.560 [2024-12-06 09:58:19.607118] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:54.560 09:58:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 82079 00:20:55.129 3326.00 IOPS, 12.99 MiB/s [2024-12-06T09:58:20.401Z] [2024-12-06 09:58:20.352814] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
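(For reference: a minimal bash sketch of the listener drop/restore cycle that produces the connect() errno = 111 failures and the "Resetting controller successful" message above. The rpc.py path, NQN, address and port are taken verbatim from the trace; everything else, including the one-second gap, is an assumption and not the harness itself.)

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1

    # Drop the TCP listener: queued host I/O is aborted (ABORTED - SQ DELETION)
    # and reconnect attempts fail with ECONNREFUSED (errno = 111).
    $RPC nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.3 -s 4420
    sleep 1
    # Restore the listener: the next reconnect attempt succeeds and bdev_nvme
    # logs "Resetting controller successful".
    $RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.3 -s 4420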
00:20:57.001 2494.50 IOPS, 9.74 MiB/s [2024-12-06T09:58:23.209Z] 3762.20 IOPS, 14.70 MiB/s [2024-12-06T09:58:24.587Z] 4945.83 IOPS, 19.32 MiB/s [2024-12-06T09:58:25.176Z] 5786.71 IOPS, 22.60 MiB/s [2024-12-06T09:58:26.565Z] 6421.38 IOPS, 25.08 MiB/s [2024-12-06T09:58:27.504Z] 6911.44 IOPS, 27.00 MiB/s [2024-12-06T09:58:27.504Z] 7307.50 IOPS, 28.54 MiB/s 00:21:02.232 Latency(us) 00:21:02.232 [2024-12-06T09:58:27.504Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:02.232 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:02.232 Verification LBA range: start 0x0 length 0x4000 00:21:02.232 NVMe0n1 : 10.01 7312.66 28.57 0.00 0.00 17469.26 2621.44 3019898.88 00:21:02.232 [2024-12-06T09:58:27.504Z] =================================================================================================================== 00:21:02.232 [2024-12-06T09:58:27.504Z] Total : 7312.66 28.57 0.00 0.00 17469.26 2621.44 3019898.88 00:21:02.232 { 00:21:02.232 "results": [ 00:21:02.232 { 00:21:02.232 "job": "NVMe0n1", 00:21:02.232 "core_mask": "0x4", 00:21:02.232 "workload": "verify", 00:21:02.232 "status": "finished", 00:21:02.232 "verify_range": { 00:21:02.232 "start": 0, 00:21:02.232 "length": 16384 00:21:02.232 }, 00:21:02.232 "queue_depth": 128, 00:21:02.232 "io_size": 4096, 00:21:02.232 "runtime": 10.009349, 00:21:02.232 "iops": 7312.663390995758, 00:21:02.232 "mibps": 28.56509137107718, 00:21:02.232 "io_failed": 0, 00:21:02.232 "io_timeout": 0, 00:21:02.232 "avg_latency_us": 17469.263242372494, 00:21:02.232 "min_latency_us": 2621.44, 00:21:02.232 "max_latency_us": 3019898.88 00:21:02.232 } 00:21:02.232 ], 00:21:02.232 "core_count": 1 00:21:02.232 } 00:21:02.232 09:58:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=82185 00:21:02.232 09:58:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:02.232 09:58:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1 00:21:02.232 Running I/O for 10 seconds... 
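(A rough sketch of how the timed run above appears to be driven: the perform_tests RPC is started in the background so the harness can perturb the target while I/O is in flight, and its pid is later waited on. The bdevperf.py path and socket are from the trace; the backgrounding and the wait are assumptions.)

    # Start a 10-second bdevperf run over JSON-RPC and remember its pid.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests &
    rpc_pid=$!
    sleep 1
    # ... remove and later re-add the subsystem listener here ...
    # Block until the run finishes and prints its Latency/JSON summary.
    wait "$rpc_pid"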
00:21:03.170 09:58:28 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:03.433 8341.00 IOPS, 32.58 MiB/s [2024-12-06T09:58:28.705Z] [2024-12-06 09:58:28.441224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:76440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:03.433 [2024-12-06 09:58:28.441286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.433 [2024-12-06 09:58:28.441306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:76448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:03.433 [2024-12-06 09:58:28.441316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.433 [2024-12-06 09:58:28.441327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:76456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:03.433 [2024-12-06 09:58:28.441335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.433 [2024-12-06 09:58:28.441346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:76464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:03.433 [2024-12-06 09:58:28.441354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.433 [2024-12-06 09:58:28.441364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:76472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:03.433 [2024-12-06 09:58:28.441372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.433 [2024-12-06 09:58:28.441382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:76480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:03.433 [2024-12-06 09:58:28.441391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.433 [2024-12-06 09:58:28.441401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:76488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:03.433 [2024-12-06 09:58:28.441409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.433 [2024-12-06 09:58:28.441419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:76496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:03.434 [2024-12-06 09:58:28.441428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.434 [2024-12-06 09:58:28.441438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:76504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:03.434 [2024-12-06 09:58:28.441447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.434 [2024-12-06 09:58:28.441457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:76512 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:03.434 [2024-12-06 09:58:28.441465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.434 [2024-12-06 09:58:28.441475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:76520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:03.434 [2024-12-06 09:58:28.441483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.434 [2024-12-06 09:58:28.441494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:76528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:03.434 [2024-12-06 09:58:28.441503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.434 [2024-12-06 09:58:28.441513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:76536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:03.434 [2024-12-06 09:58:28.441521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.434 [2024-12-06 09:58:28.441542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:76544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:03.434 [2024-12-06 09:58:28.441552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.434 [2024-12-06 09:58:28.441564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:76552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:03.434 [2024-12-06 09:58:28.441588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.434 [2024-12-06 09:58:28.441600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:76560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:03.434 [2024-12-06 09:58:28.441608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.434 [2024-12-06 09:58:28.441624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:76568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:03.434 [2024-12-06 09:58:28.441633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.434 [2024-12-06 09:58:28.441644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:76576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:03.434 [2024-12-06 09:58:28.441653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.434 [2024-12-06 09:58:28.441672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:76584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:03.434 [2024-12-06 09:58:28.441708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.434 [2024-12-06 09:58:28.441719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:76592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:03.434 [2024-12-06 09:58:28.441734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.434 [2024-12-06 09:58:28.441744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:76600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:03.434 [2024-12-06 09:58:28.441753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.434 [2024-12-06 09:58:28.441764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:76608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:03.434 [2024-12-06 09:58:28.441773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.434 [2024-12-06 09:58:28.441783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:76616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:03.434 [2024-12-06 09:58:28.441792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.434 [2024-12-06 09:58:28.441803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:76624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:03.434 [2024-12-06 09:58:28.441812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.434 [2024-12-06 09:58:28.441822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:76632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:03.434 [2024-12-06 09:58:28.441832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.434 [2024-12-06 09:58:28.441841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:76640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:03.434 [2024-12-06 09:58:28.441850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.434 [2024-12-06 09:58:28.441859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:76648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:03.434 [2024-12-06 09:58:28.441867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.434 [2024-12-06 09:58:28.441877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:76656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:03.434 [2024-12-06 09:58:28.441885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.434 [2024-12-06 09:58:28.441895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:76664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:03.434 [2024-12-06 09:58:28.441903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.434 [2024-12-06 09:58:28.441913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:76672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:03.434 [2024-12-06 09:58:28.441922] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.434 [2024-12-06 09:58:28.441932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:76680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:03.434 [2024-12-06 09:58:28.441941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.434 [2024-12-06 09:58:28.441950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:76688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:03.434 [2024-12-06 09:58:28.441962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.434 [2024-12-06 09:58:28.441971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:76696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:03.434 [2024-12-06 09:58:28.441979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.434 [2024-12-06 09:58:28.441990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:76704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:03.434 [2024-12-06 09:58:28.442004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.434 [2024-12-06 09:58:28.442014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:76712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:03.434 [2024-12-06 09:58:28.442022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.434 [2024-12-06 09:58:28.442032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:76720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:03.434 [2024-12-06 09:58:28.442040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.434 [2024-12-06 09:58:28.442050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:76728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:03.434 [2024-12-06 09:58:28.442058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.434 [2024-12-06 09:58:28.442067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:76736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:03.434 [2024-12-06 09:58:28.442076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.434 [2024-12-06 09:58:28.442091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:76744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:03.434 [2024-12-06 09:58:28.442099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.434 [2024-12-06 09:58:28.442116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:76752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:03.434 [2024-12-06 09:58:28.442124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.434 [2024-12-06 09:58:28.442143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:76760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:03.434 [2024-12-06 09:58:28.442151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.434 [2024-12-06 09:58:28.442160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:76768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:03.434 [2024-12-06 09:58:28.442169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.434 [2024-12-06 09:58:28.442181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:76776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:03.434 [2024-12-06 09:58:28.442189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.434 [2024-12-06 09:58:28.442199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:76784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:03.434 [2024-12-06 09:58:28.442207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.434 [2024-12-06 09:58:28.442217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:76792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:03.434 [2024-12-06 09:58:28.442230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.434 [2024-12-06 09:58:28.442240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:76800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:03.434 [2024-12-06 09:58:28.442248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.434 [2024-12-06 09:58:28.442259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:76808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:03.435 [2024-12-06 09:58:28.442268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.435 [2024-12-06 09:58:28.442277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:76816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:03.435 [2024-12-06 09:58:28.442286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.435 [2024-12-06 09:58:28.442296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:76824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:03.435 [2024-12-06 09:58:28.442304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.435 [2024-12-06 09:58:28.442314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:76832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:03.435 [2024-12-06 09:58:28.442322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:21:03.435 [2024-12-06 09:58:28.442332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:76840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:03.435 [2024-12-06 09:58:28.442341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.435 [2024-12-06 09:58:28.442351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:76848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:03.435 [2024-12-06 09:58:28.442359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.435 [2024-12-06 09:58:28.442369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:76856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:03.435 [2024-12-06 09:58:28.442377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.435 [2024-12-06 09:58:28.442387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:76864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:03.435 [2024-12-06 09:58:28.442401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.435 [2024-12-06 09:58:28.442411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:76872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:03.435 [2024-12-06 09:58:28.442419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.435 [2024-12-06 09:58:28.442429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:76880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:03.435 [2024-12-06 09:58:28.442437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.435 [2024-12-06 09:58:28.442447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:76888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:03.435 [2024-12-06 09:58:28.442455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.435 [2024-12-06 09:58:28.442465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:76896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:03.435 [2024-12-06 09:58:28.442473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.435 [2024-12-06 09:58:28.442483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:76904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:03.435 [2024-12-06 09:58:28.442491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.435 [2024-12-06 09:58:28.442501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:76912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:03.435 [2024-12-06 09:58:28.442509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.435 [2024-12-06 09:58:28.442519] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:76920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:03.435 [2024-12-06 09:58:28.442527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.435 [2024-12-06 09:58:28.442537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:76928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:03.435 [2024-12-06 09:58:28.442545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.435 [2024-12-06 09:58:28.442556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:76936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:03.435 [2024-12-06 09:58:28.442573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.435 [2024-12-06 09:58:28.442585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:75944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.435 [2024-12-06 09:58:28.442594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.435 [2024-12-06 09:58:28.442604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:75952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.435 [2024-12-06 09:58:28.442613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.435 [2024-12-06 09:58:28.442624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:75960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.435 [2024-12-06 09:58:28.442632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.435 [2024-12-06 09:58:28.442642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:75968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.435 [2024-12-06 09:58:28.442651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.435 [2024-12-06 09:58:28.442661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:75976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.435 [2024-12-06 09:58:28.442669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.435 [2024-12-06 09:58:28.442679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:75984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.435 [2024-12-06 09:58:28.442687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.435 [2024-12-06 09:58:28.442697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:75992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.435 [2024-12-06 09:58:28.442704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.435 [2024-12-06 09:58:28.442715] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:76000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.435 [2024-12-06 09:58:28.442723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.435 [2024-12-06 09:58:28.442733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:76008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.435 [2024-12-06 09:58:28.442746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.435 [2024-12-06 09:58:28.442756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:76016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.435 [2024-12-06 09:58:28.442765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.435 [2024-12-06 09:58:28.442775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:76024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.435 [2024-12-06 09:58:28.442783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.435 [2024-12-06 09:58:28.442793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:76032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.435 [2024-12-06 09:58:28.442801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.435 [2024-12-06 09:58:28.442817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:76040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.435 [2024-12-06 09:58:28.442825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.435 [2024-12-06 09:58:28.442834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:76048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.435 [2024-12-06 09:58:28.442843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.435 [2024-12-06 09:58:28.442853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:76056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.435 [2024-12-06 09:58:28.442874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.435 [2024-12-06 09:58:28.442884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:76944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:03.435 [2024-12-06 09:58:28.442892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.435 [2024-12-06 09:58:28.442902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:76952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:03.435 [2024-12-06 09:58:28.442910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.435 [2024-12-06 09:58:28.442920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:110 nsid:1 lba:76064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.435 [2024-12-06 09:58:28.442928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.435 [2024-12-06 09:58:28.442940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.435 [2024-12-06 09:58:28.442948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.435 [2024-12-06 09:58:28.442959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:76080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.435 [2024-12-06 09:58:28.442967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.435 [2024-12-06 09:58:28.442977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:76088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.435 [2024-12-06 09:58:28.442985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.435 [2024-12-06 09:58:28.442995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:76096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.435 [2024-12-06 09:58:28.443003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.436 [2024-12-06 09:58:28.443013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:76104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.436 [2024-12-06 09:58:28.443021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.436 [2024-12-06 09:58:28.443031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:76112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.436 [2024-12-06 09:58:28.443039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.436 [2024-12-06 09:58:28.443049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:76960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:03.436 [2024-12-06 09:58:28.443057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.436 [2024-12-06 09:58:28.443067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:76120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.436 [2024-12-06 09:58:28.443075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.436 [2024-12-06 09:58:28.443085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:76128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.436 [2024-12-06 09:58:28.443093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.436 [2024-12-06 09:58:28.443103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:76136 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.436 [2024-12-06 09:58:28.443110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.436 [2024-12-06 09:58:28.443120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:76144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.436 [2024-12-06 09:58:28.443128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.436 [2024-12-06 09:58:28.443138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:76152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.436 [2024-12-06 09:58:28.443147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.436 [2024-12-06 09:58:28.443157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:76160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.436 [2024-12-06 09:58:28.443165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.436 [2024-12-06 09:58:28.443175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:76168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.436 [2024-12-06 09:58:28.443183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.436 [2024-12-06 09:58:28.443194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:76176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.436 [2024-12-06 09:58:28.443202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.436 [2024-12-06 09:58:28.443212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:76184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.436 [2024-12-06 09:58:28.443228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.436 [2024-12-06 09:58:28.443240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:76192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.436 [2024-12-06 09:58:28.443249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.436 [2024-12-06 09:58:28.443259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:76200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.436 [2024-12-06 09:58:28.443268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.436 [2024-12-06 09:58:28.443277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:76208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.436 [2024-12-06 09:58:28.443302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.436 [2024-12-06 09:58:28.443313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:76216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:03.436 [2024-12-06 09:58:28.443321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.436 [2024-12-06 09:58:28.443331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:76224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.436 [2024-12-06 09:58:28.443340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.436 [2024-12-06 09:58:28.443350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:76232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.436 [2024-12-06 09:58:28.443358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.436 [2024-12-06 09:58:28.443368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:76240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.436 [2024-12-06 09:58:28.443376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.436 [2024-12-06 09:58:28.443387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:76248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.436 [2024-12-06 09:58:28.443395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.436 [2024-12-06 09:58:28.443405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:76256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.436 [2024-12-06 09:58:28.443414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.436 [2024-12-06 09:58:28.443423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:76264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.436 [2024-12-06 09:58:28.443431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.436 [2024-12-06 09:58:28.443441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:76272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.436 [2024-12-06 09:58:28.443450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.436 [2024-12-06 09:58:28.443460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:76280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.436 [2024-12-06 09:58:28.443469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.436 [2024-12-06 09:58:28.443479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:76288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.436 [2024-12-06 09:58:28.443488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.436 [2024-12-06 09:58:28.443498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:76296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.436 [2024-12-06 09:58:28.443506] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.436 [2024-12-06 09:58:28.443517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:76304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.436 [2024-12-06 09:58:28.443525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.436 [2024-12-06 09:58:28.443535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:76312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.436 [2024-12-06 09:58:28.443544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.436 [2024-12-06 09:58:28.443555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:76320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.436 [2024-12-06 09:58:28.443564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.436 [2024-12-06 09:58:28.443574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:76328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.436 [2024-12-06 09:58:28.443595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.436 [2024-12-06 09:58:28.443622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:76336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.436 [2024-12-06 09:58:28.443630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.436 [2024-12-06 09:58:28.443641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:76344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.436 [2024-12-06 09:58:28.443662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.436 [2024-12-06 09:58:28.443672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:76352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.436 [2024-12-06 09:58:28.443680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.436 [2024-12-06 09:58:28.443690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:76360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.436 [2024-12-06 09:58:28.443698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.436 [2024-12-06 09:58:28.443708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:76368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.436 [2024-12-06 09:58:28.443716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.436 [2024-12-06 09:58:28.443726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:76376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.436 [2024-12-06 09:58:28.443734] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.436 [2024-12-06 09:58:28.443744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:76384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.436 [2024-12-06 09:58:28.443752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.436 [2024-12-06 09:58:28.443762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:76392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.436 [2024-12-06 09:58:28.443770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.436 [2024-12-06 09:58:28.443780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:76400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.436 [2024-12-06 09:58:28.443788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.436 [2024-12-06 09:58:28.443798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:76408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.436 [2024-12-06 09:58:28.443806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.437 [2024-12-06 09:58:28.443816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:76416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.437 [2024-12-06 09:58:28.443824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.437 [2024-12-06 09:58:28.443834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:76424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.437 [2024-12-06 09:58:28.443843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.437 [2024-12-06 09:58:28.443852] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14841b0 is same with the state(6) to be set 00:21:03.437 [2024-12-06 09:58:28.443863] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:03.437 [2024-12-06 09:58:28.443870] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:03.437 [2024-12-06 09:58:28.443878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76432 len:8 PRP1 0x0 PRP2 0x0 00:21:03.437 [2024-12-06 09:58:28.443894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.437 [2024-12-06 09:58:28.444183] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:21:03.437 [2024-12-06 09:58:28.444270] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1425e50 (9): Bad file descriptor 00:21:03.437 [2024-12-06 09:58:28.444375] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:21:03.437 [2024-12-06 09:58:28.444394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1425e50 with addr=10.0.0.3, 
port=4420 00:21:03.437 [2024-12-06 09:58:28.444405] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1425e50 is same with the state(6) to be set 00:21:03.437 [2024-12-06 09:58:28.444421] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1425e50 (9): Bad file descriptor 00:21:03.437 [2024-12-06 09:58:28.444452] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:21:03.437 [2024-12-06 09:58:28.444461] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:21:03.437 [2024-12-06 09:58:28.444471] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:21:03.437 [2024-12-06 09:58:28.444481] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:21:03.437 [2024-12-06 09:58:28.444491] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:21:03.437 09:58:28 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:21:04.375 4746.50 IOPS, 18.54 MiB/s [2024-12-06T09:58:29.647Z] [2024-12-06 09:58:29.444581] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:21:04.375 [2024-12-06 09:58:29.444765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1425e50 with addr=10.0.0.3, port=4420 00:21:04.375 [2024-12-06 09:58:29.444897] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1425e50 is same with the state(6) to be set 00:21:04.375 [2024-12-06 09:58:29.445043] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1425e50 (9): Bad file descriptor 00:21:04.375 [2024-12-06 09:58:29.445111] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:21:04.375 [2024-12-06 09:58:29.445235] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:21:04.375 [2024-12-06 09:58:29.445288] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:21:04.375 [2024-12-06 09:58:29.445384] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
00:21:04.375 [2024-12-06 09:58:29.445459] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:21:05.324 3164.33 IOPS, 12.36 MiB/s [2024-12-06T09:58:30.596Z] [2024-12-06 09:58:30.445593] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:21:05.324 [2024-12-06 09:58:30.445775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1425e50 with addr=10.0.0.3, port=4420 00:21:05.324 [2024-12-06 09:58:30.445796] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1425e50 is same with the state(6) to be set 00:21:05.324 [2024-12-06 09:58:30.445817] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1425e50 (9): Bad file descriptor 00:21:05.324 [2024-12-06 09:58:30.445833] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:21:05.324 [2024-12-06 09:58:30.445842] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:21:05.324 [2024-12-06 09:58:30.445851] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:21:05.324 [2024-12-06 09:58:30.445860] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:21:05.324 [2024-12-06 09:58:30.445870] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:21:06.258 2373.25 IOPS, 9.27 MiB/s [2024-12-06T09:58:31.530Z] [2024-12-06 09:58:31.447882] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:21:06.258 [2024-12-06 09:58:31.448075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1425e50 with addr=10.0.0.3, port=4420 00:21:06.258 [2024-12-06 09:58:31.448097] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1425e50 is same with the state(6) to be set 00:21:06.259 [2024-12-06 09:58:31.448321] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1425e50 (9): Bad file descriptor 00:21:06.259 [2024-12-06 09:58:31.448549] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:21:06.259 [2024-12-06 09:58:31.448561] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:21:06.259 [2024-12-06 09:58:31.448585] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:21:06.259 [2024-12-06 09:58:31.448609] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
00:21:06.259 [2024-12-06 09:58:31.448621] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:21:06.259 09:58:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:06.517 [2024-12-06 09:58:31.714830] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:06.517 09:58:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 82185 00:21:07.345 1898.60 IOPS, 7.42 MiB/s [2024-12-06T09:58:32.617Z] [2024-12-06 09:58:32.474046] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 4] Resetting controller successful. 00:21:09.220 3156.67 IOPS, 12.33 MiB/s [2024-12-06T09:58:35.429Z] 4332.00 IOPS, 16.92 MiB/s [2024-12-06T09:58:36.372Z] 5210.75 IOPS, 20.35 MiB/s [2024-12-06T09:58:37.747Z] 5892.00 IOPS, 23.02 MiB/s [2024-12-06T09:58:37.747Z] 6438.00 IOPS, 25.15 MiB/s 00:21:12.475 Latency(us) 00:21:12.475 [2024-12-06T09:58:37.747Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:12.475 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:12.475 Verification LBA range: start 0x0 length 0x4000 00:21:12.475 NVMe0n1 : 10.01 6445.28 25.18 4363.80 0.00 11820.18 569.72 3019898.88 00:21:12.475 [2024-12-06T09:58:37.747Z] =================================================================================================================== 00:21:12.475 [2024-12-06T09:58:37.747Z] Total : 6445.28 25.18 4363.80 0.00 11820.18 0.00 3019898.88 00:21:12.475 { 00:21:12.475 "results": [ 00:21:12.475 { 00:21:12.475 "job": "NVMe0n1", 00:21:12.475 "core_mask": "0x4", 00:21:12.475 "workload": "verify", 00:21:12.475 "status": "finished", 00:21:12.475 "verify_range": { 00:21:12.475 "start": 0, 00:21:12.475 "length": 16384 00:21:12.475 }, 00:21:12.475 "queue_depth": 128, 00:21:12.475 "io_size": 4096, 00:21:12.475 "runtime": 10.007325, 00:21:12.475 "iops": 6445.278833254641, 00:21:12.475 "mibps": 25.17687044240094, 00:21:12.475 "io_failed": 43670, 00:21:12.475 "io_timeout": 0, 00:21:12.475 "avg_latency_us": 11820.178688966023, 00:21:12.475 "min_latency_us": 569.7163636363637, 00:21:12.475 "max_latency_us": 3019898.88 00:21:12.475 } 00:21:12.475 ], 00:21:12.475 "core_count": 1 00:21:12.475 } 00:21:12.475 09:58:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 82051 00:21:12.475 09:58:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 82051 ']' 00:21:12.475 09:58:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 82051 00:21:12.475 09:58:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:21:12.475 09:58:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:12.475 09:58:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82051 00:21:12.475 killing process with pid 82051 00:21:12.475 Received shutdown signal, test time was about 10.000000 seconds 00:21:12.475 00:21:12.475 Latency(us) 00:21:12.475 [2024-12-06T09:58:37.747Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:12.475 [2024-12-06T09:58:37.747Z] =================================================================================================================== 00:21:12.475 [2024-12-06T09:58:37.747Z] Total : 0.00 
0.00 0.00 0.00 0.00 0.00 0.00 00:21:12.475 09:58:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:12.475 09:58:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:12.475 09:58:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82051' 00:21:12.475 09:58:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 82051 00:21:12.475 09:58:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 82051 00:21:12.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:12.475 09:58:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=82299 00:21:12.475 09:58:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:21:12.475 09:58:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 82299 /var/tmp/bdevperf.sock 00:21:12.475 09:58:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 82299 ']' 00:21:12.475 09:58:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:12.475 09:58:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:12.475 09:58:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:12.475 09:58:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:12.475 09:58:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:12.475 [2024-12-06 09:58:37.671304] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
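As a quick sanity check on the results JSON above (job NVMe0n1, 10 s randread run): the reported MiB/s is simply the measured IOPS multiplied by the 4096-byte IO size. A minimal shell sketch using the values from this run (not part of the captured output):

  awk 'BEGIN {
      iops    = 6445.278833254641   # "iops" from the results JSON
      io_size = 4096                # "io_size" from the bdevperf command line (-o 4096)
      printf "%.2f MiB/s\n", iops * io_size / (1024 * 1024)   # prints 25.18, matching "mibps"
  }'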
00:21:12.475 [2024-12-06 09:58:37.671607] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82299 ] 00:21:12.733 [2024-12-06 09:58:37.812256] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:12.733 [2024-12-06 09:58:37.857181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:12.733 [2024-12-06 09:58:37.925537] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:13.667 09:58:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:13.667 09:58:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:21:13.667 09:58:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=82315 00:21:13.667 09:58:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 82299 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:21:13.668 09:58:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:21:13.926 09:58:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:21:14.184 NVMe0n1 00:21:14.184 09:58:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=82351 00:21:14.184 09:58:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:14.184 09:58:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:21:14.184 Running I/O for 10 seconds... 
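For readability, the controller set-up traced above boils down to the following RPC sequence against the freshly started bdevperf instance (flags and paths exactly as they appear in the trace; a condensed sketch of the traced steps, not a replacement for host/timeout.sh):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # bdev_nvme options passed by the test before attaching the controller (values as captured above)
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9
  # attach the TCP target with a 5 s controller-loss timeout and a 2 s reconnect delay
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
  # start the queued randread job (-q 128 -o 4096 -w randread -t 10 from the bdevperf command line)
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests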
00:21:15.119 09:58:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:15.381 17526.00 IOPS, 68.46 MiB/s [2024-12-06T09:58:40.653Z] [2024-12-06 09:58:40.578110] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2124e10 is same with the state(6) to be set 00:21:15.381 [2024-12-06 09:58:40.578406] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2124e10 is same with the state(6) to be set 00:21:15.381 [2024-12-06 09:58:40.578422] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2124e10 is same with the state(6) to be set 00:21:15.381 [2024-12-06 09:58:40.578432] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2124e10 is same with the state(6) to be set 00:21:15.381 [2024-12-06 09:58:40.578441] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2124e10 is same with the state(6) to be set 00:21:15.381 [2024-12-06 09:58:40.578451] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2124e10 is same with the state(6) to be set 00:21:15.381 [2024-12-06 09:58:40.578460] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2124e10 is same with the state(6) to be set 00:21:15.381 [2024-12-06 09:58:40.578469] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2124e10 is same with the state(6) to be set 00:21:15.381 [2024-12-06 09:58:40.578477] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2124e10 is same with the state(6) to be set 00:21:15.381 [2024-12-06 09:58:40.578486] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2124e10 is same with the state(6) to be set 00:21:15.381 [2024-12-06 09:58:40.578494] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2124e10 is same with the state(6) to be set 00:21:15.381 [2024-12-06 09:58:40.578504] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2124e10 is same with the state(6) to be set 00:21:15.381 [2024-12-06 09:58:40.578512] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2124e10 is same with the state(6) to be set 00:21:15.381 [2024-12-06 09:58:40.578521] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2124e10 is same with the state(6) to be set 00:21:15.381 [2024-12-06 09:58:40.578529] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2124e10 is same with the state(6) to be set 00:21:15.381 [2024-12-06 09:58:40.578538] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2124e10 is same with the state(6) to be set 00:21:15.381 [2024-12-06 09:58:40.578547] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2124e10 is same with the state(6) to be set 00:21:15.381 [2024-12-06 09:58:40.578556] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2124e10 is same with the state(6) to be set 00:21:15.381 [2024-12-06 09:58:40.578578] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2124e10 is same with the state(6) to be set 00:21:15.381 [2024-12-06 09:58:40.578591] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2124e10 is same with the state(6) to be set 
00:21:15.381 [2024-12-06 09:58:40.578600] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2124e10 is same with the state(6) to be set
00:21:15.381 [... the same "recv state of tqpair=0x2124e10 is same with the state(6) to be set" error repeats many more times, timestamps 09:58:40.578610 through 09:58:40.579542 ...]
00:21:15.382 [2024-12-06 09:58:40.579622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:88880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:15.382 [2024-12-06 09:58:40.579651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:15.383 [... the READ command / "ABORTED - SQ DELETION" completion pair repeats for cid:3 through cid:114 with varying LBAs, timestamps 09:58:40.579671 through 09:58:40.581772 ...]
00:21:15.385 [2024-12-06
09:58:40.581783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:79720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.385 [2024-12-06 09:58:40.581791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:15.385 [2024-12-06 09:58:40.581801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:57112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.386 [2024-12-06 09:58:40.581809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:15.386 [2024-12-06 09:58:40.581818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:104776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.386 [2024-12-06 09:58:40.581826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:15.386 [2024-12-06 09:58:40.581836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:74096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.386 [2024-12-06 09:58:40.581844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:15.386 [2024-12-06 09:58:40.581854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:80640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.386 [2024-12-06 09:58:40.581862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:15.386 [2024-12-06 09:58:40.581871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:103752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.386 [2024-12-06 09:58:40.581880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:15.386 [2024-12-06 09:58:40.581895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:37024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.386 [2024-12-06 09:58:40.581903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:15.386 [2024-12-06 09:58:40.581913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:129552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.386 [2024-12-06 09:58:40.581921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:15.386 [2024-12-06 09:58:40.581931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:51736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.386 [2024-12-06 09:58:40.581939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:15.386 [2024-12-06 09:58:40.581949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:93248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.386 [2024-12-06 09:58:40.581959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:15.386 [2024-12-06 09:58:40.581969] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:117672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.386 [2024-12-06 09:58:40.581977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:15.386 [2024-12-06 09:58:40.581987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:72208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.386 [2024-12-06 09:58:40.581995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:15.386 [2024-12-06 09:58:40.582005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:65128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.386 [2024-12-06 09:58:40.582013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:15.386 [2024-12-06 09:58:40.582023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.386 [2024-12-06 09:58:40.582031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:15.386 [2024-12-06 09:58:40.582041] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2095920 is same with the state(6) to be set 00:21:15.386 [2024-12-06 09:58:40.582061] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:15.386 [2024-12-06 09:58:40.582068] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:15.386 [2024-12-06 09:58:40.582081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:104664 len:8 PRP1 0x0 PRP2 0x0 00:21:15.386 [2024-12-06 09:58:40.582090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:15.386 [2024-12-06 09:58:40.582405] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:21:15.386 [2024-12-06 09:58:40.582481] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2028e50 (9): Bad file descriptor 00:21:15.386 [2024-12-06 09:58:40.583035] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:21:15.386 [2024-12-06 09:58:40.583191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2028e50 with addr=10.0.0.3, port=4420 00:21:15.386 [2024-12-06 09:58:40.583449] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2028e50 is same with the state(6) to be set 00:21:15.386 [2024-12-06 09:58:40.583629] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2028e50 (9): Bad file descriptor 00:21:15.386 [2024-12-06 09:58:40.583790] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:21:15.386 [2024-12-06 09:58:40.583866] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:21:15.386 [2024-12-06 09:58:40.583998] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 
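The wall of NOTICE entries above is the host draining its queue after the disconnect this test induces: every READ still queued on I/O qpair 1 is completed with status 00/08, which in NVMe terms is status code type 0x0 (generic command status) and status code 0x08, Command Aborted due to SQ Deletion. A minimal triage sketch, assuming this console output has been saved to a file named console.log (hypothetical name, not produced by the test itself):

    # Count the queued READs completed as ABORTED - SQ DELETION (sct 0x0 / sc 0x08)
    # while the fabric connection was being torn down.
    grep -c 'ABORTED - SQ DELETION (00/08)' console.log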
00:21:15.386 [2024-12-06 09:58:40.584046] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:21:15.386 [2024-12-06 09:58:40.584099] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:21:15.386 09:58:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 82351 00:21:17.261 9907.00 IOPS, 38.70 MiB/s [2024-12-06T09:58:42.792Z] 6604.67 IOPS, 25.80 MiB/s [2024-12-06T09:58:42.792Z] [2024-12-06 09:58:42.584309] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:21:17.520 [2024-12-06 09:58:42.584494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2028e50 with addr=10.0.0.3, port=4420 00:21:17.520 [2024-12-06 09:58:42.584651] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2028e50 is same with the state(6) to be set 00:21:17.520 [2024-12-06 09:58:42.584721] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2028e50 (9): Bad file descriptor 00:21:17.520 [2024-12-06 09:58:42.584910] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:21:17.520 [2024-12-06 09:58:42.584923] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:21:17.520 [2024-12-06 09:58:42.584932] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:21:17.520 [2024-12-06 09:58:42.584942] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:21:17.520 [2024-12-06 09:58:42.584961] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:21:19.396 4953.50 IOPS, 19.35 MiB/s [2024-12-06T09:58:44.668Z] 3962.80 IOPS, 15.48 MiB/s [2024-12-06T09:58:44.668Z] [2024-12-06 09:58:44.585057] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:21:19.396 [2024-12-06 09:58:44.585249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2028e50 with addr=10.0.0.3, port=4420 00:21:19.396 [2024-12-06 09:58:44.585271] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2028e50 is same with the state(6) to be set 00:21:19.396 [2024-12-06 09:58:44.585293] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2028e50 (9): Bad file descriptor 00:21:19.396 [2024-12-06 09:58:44.585310] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:21:19.396 [2024-12-06 09:58:44.585320] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:21:19.396 [2024-12-06 09:58:44.585329] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:21:19.396 [2024-12-06 09:58:44.585339] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 
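From here the host retries the connection roughly every two seconds and each attempt is refused (connect() errno = 111, ECONNREFUSED on Linux), so bdev_nvme keeps reporting a failed controller reset while the running IOPS average decays. A small illustrative extraction, again assuming the output was saved as console.log, makes the reconnect cadence easy to check against the trace probes printed further down:

    # List the wall-clock timestamps of each refused connect(); the date string
    # matched here is the one this run logs.
    grep 'connect() failed, errno = 111' console.log |
      grep -o '\[2024-12-06 [0-9:.]*\]'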
00:21:19.396 [2024-12-06 09:58:44.585348] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:21:21.274 3302.33 IOPS, 12.90 MiB/s [2024-12-06T09:58:46.808Z] 2830.57 IOPS, 11.06 MiB/s [2024-12-06T09:58:46.808Z] [2024-12-06 09:58:46.585396] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:21:21.536 [2024-12-06 09:58:46.585428] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:21:21.536 [2024-12-06 09:58:46.585451] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:21:21.536 [2024-12-06 09:58:46.585459] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] already in failed state 00:21:21.536 [2024-12-06 09:58:46.585469] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:21:22.469 2476.75 IOPS, 9.67 MiB/s 00:21:22.469 Latency(us) 00:21:22.469 [2024-12-06T09:58:47.741Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:22.469 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:21:22.469 NVMe0n1 : 8.14 2435.32 9.51 15.73 0.00 52144.47 7030.23 7015926.69 00:21:22.469 [2024-12-06T09:58:47.741Z] =================================================================================================================== 00:21:22.469 [2024-12-06T09:58:47.741Z] Total : 2435.32 9.51 15.73 0.00 52144.47 7030.23 7015926.69 00:21:22.469 { 00:21:22.469 "results": [ 00:21:22.469 { 00:21:22.469 "job": "NVMe0n1", 00:21:22.469 "core_mask": "0x4", 00:21:22.469 "workload": "randread", 00:21:22.469 "status": "finished", 00:21:22.469 "queue_depth": 128, 00:21:22.469 "io_size": 4096, 00:21:22.469 "runtime": 8.136103, 00:21:22.469 "iops": 2435.318235277995, 00:21:22.469 "mibps": 9.512961856554668, 00:21:22.469 "io_failed": 128, 00:21:22.469 "io_timeout": 0, 00:21:22.469 "avg_latency_us": 52144.46595052926, 00:21:22.469 "min_latency_us": 7030.225454545454, 00:21:22.469 "max_latency_us": 7015926.69090909 00:21:22.469 } 00:21:22.469 ], 00:21:22.469 "core_count": 1 00:21:22.469 } 00:21:22.469 09:58:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:22.469 Attaching 5 probes... 
00:21:22.470 1486.987997: reset bdev controller NVMe0 00:21:22.470 1487.547811: reconnect bdev controller NVMe0 00:21:22.470 3488.841923: reconnect delay bdev controller NVMe0 00:21:22.470 3488.855649: reconnect bdev controller NVMe0 00:21:22.470 5489.592354: reconnect delay bdev controller NVMe0 00:21:22.470 5489.604718: reconnect bdev controller NVMe0 00:21:22.470 7489.977722: reconnect delay bdev controller NVMe0 00:21:22.470 7489.991342: reconnect bdev controller NVMe0 00:21:22.470 09:58:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:21:22.470 09:58:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:21:22.470 09:58:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 82315 00:21:22.470 09:58:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:22.470 09:58:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 82299 00:21:22.470 09:58:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 82299 ']' 00:21:22.470 09:58:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 82299 00:21:22.470 09:58:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:21:22.470 09:58:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:22.470 09:58:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82299 00:21:22.470 killing process with pid 82299 00:21:22.470 Received shutdown signal, test time was about 8.202985 seconds 00:21:22.470 00:21:22.470 Latency(us) 00:21:22.470 [2024-12-06T09:58:47.742Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:22.470 [2024-12-06T09:58:47.742Z] =================================================================================================================== 00:21:22.470 [2024-12-06T09:58:47.742Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:22.470 09:58:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:22.470 09:58:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:22.470 09:58:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82299' 00:21:22.470 09:58:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 82299 00:21:22.470 09:58:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 82299 00:21:22.727 09:58:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:22.985 09:58:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:21:22.985 09:58:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:21:22.985 09:58:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:22.985 09:58:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@121 -- # sync 00:21:22.985 09:58:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:22.985 09:58:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@124 -- # set +e 00:21:22.985 09:58:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:22.985 09:58:48 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:22.985 rmmod nvme_tcp 00:21:22.985 rmmod nvme_fabrics 00:21:23.244 rmmod nvme_keyring 00:21:23.244 09:58:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:23.244 09:58:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@128 -- # set -e 00:21:23.244 09:58:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@129 -- # return 0 00:21:23.244 09:58:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@517 -- # '[' -n 81875 ']' 00:21:23.244 09:58:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@518 -- # killprocess 81875 00:21:23.244 09:58:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 81875 ']' 00:21:23.244 09:58:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 81875 00:21:23.244 09:58:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:21:23.244 09:58:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:23.244 09:58:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81875 00:21:23.244 killing process with pid 81875 00:21:23.244 09:58:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:23.244 09:58:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:23.244 09:58:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81875' 00:21:23.244 09:58:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 81875 00:21:23.244 09:58:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 81875 00:21:23.520 09:58:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:23.520 09:58:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:23.520 09:58:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:23.520 09:58:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@297 -- # iptr 00:21:23.520 09:58:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-save 00:21:23.521 09:58:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:23.521 09:58:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-restore 00:21:23.521 09:58:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:23.521 09:58:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:23.521 09:58:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:23.521 09:58:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:23.521 09:58:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:23.521 09:58:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:23.521 09:58:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:23.521 09:58:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:23.521 09:58:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:23.521 09:58:48 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:23.521 09:58:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:23.521 09:58:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:23.521 09:58:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:23.521 09:58:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:23.521 09:58:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:23.841 09:58:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:23.841 09:58:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:23.841 09:58:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:23.841 09:58:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:23.841 09:58:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@300 -- # return 0 00:21:23.841 ************************************ 00:21:23.841 END TEST nvmf_timeout 00:21:23.841 ************************************ 00:21:23.841 00:21:23.841 real 0m46.428s 00:21:23.841 user 2m15.803s 00:21:23.841 sys 0m5.639s 00:21:23.841 09:58:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:23.841 09:58:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:23.841 09:58:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ virt == phy ]] 00:21:23.841 09:58:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:21:23.841 ************************************ 00:21:23.841 END TEST nvmf_host 00:21:23.841 ************************************ 00:21:23.841 00:21:23.841 real 5m10.210s 00:21:23.841 user 13m29.399s 00:21:23.841 sys 1m11.743s 00:21:23.841 09:58:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:23.841 09:58:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:23.841 09:58:48 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:21:23.841 09:58:48 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 1 -eq 0 ]] 00:21:23.841 ************************************ 00:21:23.841 END TEST nvmf_tcp 00:21:23.841 ************************************ 00:21:23.841 00:21:23.841 real 12m48.071s 00:21:23.841 user 30m40.472s 00:21:23.841 sys 3m17.854s 00:21:23.841 09:58:48 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:23.841 09:58:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:23.841 09:58:48 -- spdk/autotest.sh@285 -- # [[ 1 -eq 0 ]] 00:21:23.841 09:58:48 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:21:23.841 09:58:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:23.841 09:58:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:23.841 09:58:48 -- common/autotest_common.sh@10 -- # set +x 00:21:23.841 ************************************ 00:21:23.841 START TEST nvmf_dif 00:21:23.841 ************************************ 00:21:23.841 09:58:48 nvmf_dif -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:21:23.841 * Looking for test storage... 
00:21:24.101 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:24.101 09:58:49 nvmf_dif -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:24.101 09:58:49 nvmf_dif -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:24.101 09:58:49 nvmf_dif -- common/autotest_common.sh@1711 -- # lcov --version 00:21:24.101 09:58:49 nvmf_dif -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:24.101 09:58:49 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:24.101 09:58:49 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:24.101 09:58:49 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:24.101 09:58:49 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:21:24.101 09:58:49 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:21:24.101 09:58:49 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:21:24.101 09:58:49 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:21:24.101 09:58:49 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:21:24.101 09:58:49 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:21:24.101 09:58:49 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:21:24.101 09:58:49 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:24.101 09:58:49 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:21:24.101 09:58:49 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:21:24.101 09:58:49 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:24.101 09:58:49 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:24.101 09:58:49 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:21:24.101 09:58:49 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:21:24.101 09:58:49 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:24.101 09:58:49 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:21:24.101 09:58:49 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:21:24.101 09:58:49 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:21:24.101 09:58:49 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:21:24.101 09:58:49 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:24.101 09:58:49 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:21:24.101 09:58:49 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:21:24.101 09:58:49 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:24.101 09:58:49 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:24.101 09:58:49 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:21:24.101 09:58:49 nvmf_dif -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:24.101 09:58:49 nvmf_dif -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:24.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:24.101 --rc genhtml_branch_coverage=1 00:21:24.101 --rc genhtml_function_coverage=1 00:21:24.101 --rc genhtml_legend=1 00:21:24.101 --rc geninfo_all_blocks=1 00:21:24.101 --rc geninfo_unexecuted_blocks=1 00:21:24.101 00:21:24.101 ' 00:21:24.101 09:58:49 nvmf_dif -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:24.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:24.101 --rc genhtml_branch_coverage=1 00:21:24.101 --rc genhtml_function_coverage=1 00:21:24.101 --rc genhtml_legend=1 00:21:24.101 --rc geninfo_all_blocks=1 00:21:24.101 --rc geninfo_unexecuted_blocks=1 00:21:24.101 00:21:24.101 ' 00:21:24.101 09:58:49 nvmf_dif -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:21:24.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:24.101 --rc genhtml_branch_coverage=1 00:21:24.101 --rc genhtml_function_coverage=1 00:21:24.101 --rc genhtml_legend=1 00:21:24.101 --rc geninfo_all_blocks=1 00:21:24.101 --rc geninfo_unexecuted_blocks=1 00:21:24.101 00:21:24.101 ' 00:21:24.101 09:58:49 nvmf_dif -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:24.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:24.101 --rc genhtml_branch_coverage=1 00:21:24.101 --rc genhtml_function_coverage=1 00:21:24.101 --rc genhtml_legend=1 00:21:24.101 --rc geninfo_all_blocks=1 00:21:24.101 --rc geninfo_unexecuted_blocks=1 00:21:24.101 00:21:24.101 ' 00:21:24.101 09:58:49 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:24.101 09:58:49 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:21:24.101 09:58:49 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:24.101 09:58:49 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:24.101 09:58:49 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:24.101 09:58:49 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:24.101 09:58:49 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:24.101 09:58:49 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:24.101 09:58:49 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:24.101 09:58:49 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:24.101 09:58:49 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:24.101 09:58:49 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:24.101 09:58:49 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:21:24.101 09:58:49 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:21:24.101 09:58:49 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:24.101 09:58:49 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:24.101 09:58:49 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:24.101 09:58:49 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:24.102 09:58:49 nvmf_dif -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:24.102 09:58:49 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:21:24.102 09:58:49 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:24.102 09:58:49 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:24.102 09:58:49 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:24.102 09:58:49 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:24.102 09:58:49 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:24.102 09:58:49 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:24.102 09:58:49 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:21:24.102 09:58:49 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:24.102 09:58:49 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:21:24.102 09:58:49 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:24.102 09:58:49 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:24.102 09:58:49 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:24.102 09:58:49 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:24.102 09:58:49 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:24.102 09:58:49 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:24.102 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:24.102 09:58:49 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:24.102 09:58:49 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:24.102 09:58:49 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:24.102 09:58:49 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:21:24.102 09:58:49 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:21:24.102 09:58:49 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:21:24.102 09:58:49 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:21:24.102 09:58:49 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:21:24.102 09:58:49 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:24.102 09:58:49 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:24.102 09:58:49 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:24.102 09:58:49 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:24.102 09:58:49 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:24.102 09:58:49 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:24.102 09:58:49 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:24.102 09:58:49 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:24.102 09:58:49 nvmf_dif -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:21:24.102 09:58:49 nvmf_dif -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:21:24.102 09:58:49 nvmf_dif -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:21:24.102 09:58:49 
nvmf_dif -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:21:24.102 09:58:49 nvmf_dif -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:21:24.102 09:58:49 nvmf_dif -- nvmf/common.sh@460 -- # nvmf_veth_init 00:21:24.102 09:58:49 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:24.102 09:58:49 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:24.102 09:58:49 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:24.102 09:58:49 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:24.102 09:58:49 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:24.102 09:58:49 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:24.102 09:58:49 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:24.102 09:58:49 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:24.102 09:58:49 nvmf_dif -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:24.102 09:58:49 nvmf_dif -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:24.102 09:58:49 nvmf_dif -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:24.102 09:58:49 nvmf_dif -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:24.102 09:58:49 nvmf_dif -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:24.102 09:58:49 nvmf_dif -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:24.102 09:58:49 nvmf_dif -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:24.102 09:58:49 nvmf_dif -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:24.102 09:58:49 nvmf_dif -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:24.102 Cannot find device "nvmf_init_br" 00:21:24.102 09:58:49 nvmf_dif -- nvmf/common.sh@162 -- # true 00:21:24.102 09:58:49 nvmf_dif -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:24.102 Cannot find device "nvmf_init_br2" 00:21:24.102 09:58:49 nvmf_dif -- nvmf/common.sh@163 -- # true 00:21:24.102 09:58:49 nvmf_dif -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:21:24.102 Cannot find device "nvmf_tgt_br" 00:21:24.102 09:58:49 nvmf_dif -- nvmf/common.sh@164 -- # true 00:21:24.102 09:58:49 nvmf_dif -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:21:24.102 Cannot find device "nvmf_tgt_br2" 00:21:24.102 09:58:49 nvmf_dif -- nvmf/common.sh@165 -- # true 00:21:24.102 09:58:49 nvmf_dif -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:24.102 Cannot find device "nvmf_init_br" 00:21:24.102 09:58:49 nvmf_dif -- nvmf/common.sh@166 -- # true 00:21:24.102 09:58:49 nvmf_dif -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:24.102 Cannot find device "nvmf_init_br2" 00:21:24.102 09:58:49 nvmf_dif -- nvmf/common.sh@167 -- # true 00:21:24.102 09:58:49 nvmf_dif -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:24.102 Cannot find device "nvmf_tgt_br" 00:21:24.102 09:58:49 nvmf_dif -- nvmf/common.sh@168 -- # true 00:21:24.102 09:58:49 nvmf_dif -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:24.102 Cannot find device "nvmf_tgt_br2" 00:21:24.102 09:58:49 nvmf_dif -- nvmf/common.sh@169 -- # true 00:21:24.102 09:58:49 nvmf_dif -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:24.102 Cannot find device "nvmf_br" 00:21:24.102 09:58:49 nvmf_dif -- nvmf/common.sh@170 -- # true 00:21:24.102 09:58:49 nvmf_dif -- nvmf/common.sh@171 -- # 
ip link delete nvmf_init_if 00:21:24.102 Cannot find device "nvmf_init_if" 00:21:24.102 09:58:49 nvmf_dif -- nvmf/common.sh@171 -- # true 00:21:24.102 09:58:49 nvmf_dif -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:24.102 Cannot find device "nvmf_init_if2" 00:21:24.102 09:58:49 nvmf_dif -- nvmf/common.sh@172 -- # true 00:21:24.102 09:58:49 nvmf_dif -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:24.102 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:24.102 09:58:49 nvmf_dif -- nvmf/common.sh@173 -- # true 00:21:24.102 09:58:49 nvmf_dif -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:24.102 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:24.102 09:58:49 nvmf_dif -- nvmf/common.sh@174 -- # true 00:21:24.102 09:58:49 nvmf_dif -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:24.102 09:58:49 nvmf_dif -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:24.102 09:58:49 nvmf_dif -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:24.102 09:58:49 nvmf_dif -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:24.102 09:58:49 nvmf_dif -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:24.102 09:58:49 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:24.102 09:58:49 nvmf_dif -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:24.361 09:58:49 nvmf_dif -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:24.361 09:58:49 nvmf_dif -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:24.361 09:58:49 nvmf_dif -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:24.361 09:58:49 nvmf_dif -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:24.361 09:58:49 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:24.361 09:58:49 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:24.361 09:58:49 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:24.361 09:58:49 nvmf_dif -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:24.361 09:58:49 nvmf_dif -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:24.361 09:58:49 nvmf_dif -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:24.361 09:58:49 nvmf_dif -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:24.361 09:58:49 nvmf_dif -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:24.361 09:58:49 nvmf_dif -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:24.361 09:58:49 nvmf_dif -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:24.361 09:58:49 nvmf_dif -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:24.361 09:58:49 nvmf_dif -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:21:24.361 09:58:49 nvmf_dif -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:24.361 09:58:49 nvmf_dif -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:24.361 09:58:49 nvmf_dif -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:24.361 09:58:49 nvmf_dif -- 
nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:24.361 09:58:49 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:24.361 09:58:49 nvmf_dif -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:24.361 09:58:49 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:24.361 09:58:49 nvmf_dif -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:24.361 09:58:49 nvmf_dif -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:24.361 09:58:49 nvmf_dif -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:24.361 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:24.361 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.088 ms 00:21:24.361 00:21:24.361 --- 10.0.0.3 ping statistics --- 00:21:24.361 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:24.361 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:21:24.361 09:58:49 nvmf_dif -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:24.361 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:21:24.361 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.060 ms 00:21:24.361 00:21:24.361 --- 10.0.0.4 ping statistics --- 00:21:24.361 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:24.361 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:21:24.361 09:58:49 nvmf_dif -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:24.361 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:24.361 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:21:24.361 00:21:24.361 --- 10.0.0.1 ping statistics --- 00:21:24.361 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:24.361 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:21:24.361 09:58:49 nvmf_dif -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:24.361 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:24.361 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.093 ms 00:21:24.361 00:21:24.361 --- 10.0.0.2 ping statistics --- 00:21:24.361 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:24.361 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:21:24.361 09:58:49 nvmf_dif -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:24.361 09:58:49 nvmf_dif -- nvmf/common.sh@461 -- # return 0 00:21:24.361 09:58:49 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:21:24.361 09:58:49 nvmf_dif -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:24.620 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:24.880 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:21:24.880 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:21:24.880 09:58:49 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:24.880 09:58:49 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:24.880 09:58:49 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:24.880 09:58:49 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:24.880 09:58:49 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:24.880 09:58:49 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:24.880 09:58:49 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:21:24.880 09:58:49 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:21:24.880 09:58:49 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:24.880 09:58:49 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:24.880 09:58:49 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:21:24.880 09:58:49 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=82847 00:21:24.880 09:58:49 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:24.880 09:58:49 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 82847 00:21:24.880 09:58:49 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 82847 ']' 00:21:24.880 09:58:49 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:24.880 09:58:49 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:24.880 09:58:49 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:24.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:24.880 09:58:49 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:24.880 09:58:49 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:21:24.880 [2024-12-06 09:58:50.035399] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:21:24.880 [2024-12-06 09:58:50.035496] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:25.140 [2024-12-06 09:58:50.189148] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:25.140 [2024-12-06 09:58:50.246047] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
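At this point the harness has launched the target application inside the nvmf_tgt_ns_spdk namespace and is waiting for its RPC socket to answer. A condensed sketch of what nvmfappstart and waitforlisten do here, with the poll loop written out for illustration rather than copied from the helpers:

    SPDK=/home/vagrant/spdk_repo/spdk
    ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF &
    nvmfpid=$!
    # Poll until the target answers on its default RPC socket; the real helper
    # does more bookkeeping than this illustrative loop.
    until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done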
00:21:25.140 [2024-12-06 09:58:50.246114] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:25.140 [2024-12-06 09:58:50.246128] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:25.140 [2024-12-06 09:58:50.246138] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:25.140 [2024-12-06 09:58:50.246147] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:25.140 [2024-12-06 09:58:50.246634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:25.140 [2024-12-06 09:58:50.321138] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:25.140 09:58:50 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:25.140 09:58:50 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:21:25.140 09:58:50 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:25.140 09:58:50 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:25.140 09:58:50 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:21:25.400 09:58:50 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:25.400 09:58:50 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:21:25.400 09:58:50 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:21:25.400 09:58:50 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.400 09:58:50 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:21:25.400 [2024-12-06 09:58:50.444335] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:25.400 09:58:50 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.400 09:58:50 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:21:25.400 09:58:50 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:25.400 09:58:50 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:25.400 09:58:50 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:21:25.400 ************************************ 00:21:25.400 START TEST fio_dif_1_default 00:21:25.400 ************************************ 00:21:25.400 09:58:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:21:25.400 09:58:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:21:25.400 09:58:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:21:25.400 09:58:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:21:25.400 09:58:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:21:25.400 09:58:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:21:25.400 09:58:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:21:25.400 09:58:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.400 09:58:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:21:25.400 bdev_null0 00:21:25.400 09:58:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.400 09:58:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:21:25.400 
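For fio_dif_1_default the target setup boils down to a short RPC sequence: a TCP transport created with --dif-insert-or-strip, a 64 MiB null bdev with 512-byte blocks carrying 16 bytes of metadata and DIF type 1 protection, and a subsystem exposing that bdev on 10.0.0.3:4420. Collected from the surrounding log into one place (rpc_cmd in the harness ultimately issues these calls against the target's /var/tmp/spdk.sock socket; scripts/rpc.py is shown below for clarity):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
    $rpc nvmf_create_transport -t tcp -o --dif-insert-or-strip
    # 64 MiB null bdev, 512-byte blocks + 16-byte metadata, protection type 1
    $rpc bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420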
09:58:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.400 09:58:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:21:25.400 09:58:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.400 09:58:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:21:25.400 09:58:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.400 09:58:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:21:25.400 09:58:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.400 09:58:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:21:25.400 09:58:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.400 09:58:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:21:25.400 [2024-12-06 09:58:50.488534] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:25.400 09:58:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.400 09:58:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:21:25.400 09:58:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:21:25.400 09:58:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:21:25.400 09:58:50 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:21:25.400 09:58:50 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:21:25.400 09:58:50 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:25.400 09:58:50 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:25.400 { 00:21:25.400 "params": { 00:21:25.400 "name": "Nvme$subsystem", 00:21:25.400 "trtype": "$TEST_TRANSPORT", 00:21:25.400 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:25.400 "adrfam": "ipv4", 00:21:25.400 "trsvcid": "$NVMF_PORT", 00:21:25.400 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:25.400 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:25.400 "hdgst": ${hdgst:-false}, 00:21:25.400 "ddgst": ${ddgst:-false} 00:21:25.400 }, 00:21:25.400 "method": "bdev_nvme_attach_controller" 00:21:25.400 } 00:21:25.400 EOF 00:21:25.400 )") 00:21:25.400 09:58:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:25.400 09:58:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:21:25.400 09:58:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:25.400 09:58:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:21:25.400 09:58:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:25.400 09:58:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:21:25.400 09:58:50 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:21:25.400 09:58:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:25.400 09:58:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # 
local sanitizers 00:21:25.400 09:58:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:25.400 09:58:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:21:25.400 09:58:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:25.400 09:58:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:25.400 09:58:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:25.400 09:58:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:21:25.400 09:58:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:25.400 09:58:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:21:25.400 09:58:50 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 00:21:25.400 09:58:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:21:25.400 09:58:50 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:21:25.400 09:58:50 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:25.400 "params": { 00:21:25.400 "name": "Nvme0", 00:21:25.400 "trtype": "tcp", 00:21:25.400 "traddr": "10.0.0.3", 00:21:25.400 "adrfam": "ipv4", 00:21:25.400 "trsvcid": "4420", 00:21:25.400 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:25.400 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:25.400 "hdgst": false, 00:21:25.400 "ddgst": false 00:21:25.400 }, 00:21:25.400 "method": "bdev_nvme_attach_controller" 00:21:25.400 }' 00:21:25.400 09:58:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:21:25.400 09:58:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:21:25.400 09:58:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:25.400 09:58:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:25.400 09:58:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:21:25.401 09:58:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:25.401 09:58:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:21:25.401 09:58:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:21:25.401 09:58:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:25.401 09:58:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:25.660 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:21:25.660 fio-3.35 00:21:25.660 Starting 1 thread 00:21:37.870 00:21:37.870 filename0: (groupid=0, jobs=1): err= 0: pid=82906: Fri Dec 6 09:59:01 2024 00:21:37.870 read: IOPS=9961, BW=38.9MiB/s (40.8MB/s)(389MiB/10001msec) 00:21:37.870 slat (nsec): min=5976, max=85384, avg=7982.66, stdev=3647.35 00:21:37.870 clat (usec): min=326, max=2296, avg=377.97, stdev=39.75 00:21:37.870 lat (usec): min=332, max=2326, avg=385.95, stdev=40.53 00:21:37.870 clat percentiles (usec): 00:21:37.870 | 1.00th=[ 330], 5.00th=[ 338], 
10.00th=[ 343], 20.00th=[ 351], 00:21:37.870 | 30.00th=[ 355], 40.00th=[ 363], 50.00th=[ 371], 60.00th=[ 379], 00:21:37.870 | 70.00th=[ 388], 80.00th=[ 404], 90.00th=[ 429], 95.00th=[ 445], 00:21:37.870 | 99.00th=[ 486], 99.50th=[ 498], 99.90th=[ 545], 99.95th=[ 570], 00:21:37.870 | 99.99th=[ 1336] 00:21:37.870 bw ( KiB/s): min=34912, max=42208, per=99.88%, avg=39796.21, stdev=1983.92, samples=19 00:21:37.870 iops : min= 8728, max=10552, avg=9949.05, stdev=495.98, samples=19 00:21:37.870 lat (usec) : 500=99.52%, 750=0.45%, 1000=0.02% 00:21:37.870 lat (msec) : 2=0.01%, 4=0.01% 00:21:37.870 cpu : usr=83.95%, sys=14.07%, ctx=20, majf=0, minf=9 00:21:37.870 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:37.870 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:37.870 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:37.870 issued rwts: total=99620,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:37.870 latency : target=0, window=0, percentile=100.00%, depth=4 00:21:37.870 00:21:37.870 Run status group 0 (all jobs): 00:21:37.870 READ: bw=38.9MiB/s (40.8MB/s), 38.9MiB/s-38.9MiB/s (40.8MB/s-40.8MB/s), io=389MiB (408MB), run=10001-10001msec 00:21:37.870 09:59:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:21:37.871 09:59:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:21:37.871 09:59:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:21:37.871 09:59:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:21:37.871 09:59:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:21:37.871 09:59:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:37.871 09:59:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.871 09:59:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:21:37.871 09:59:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.871 09:59:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:21:37.871 09:59:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.871 09:59:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:21:37.871 09:59:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.871 00:21:37.871 real 0m11.036s 00:21:37.871 user 0m9.063s 00:21:37.871 sys 0m1.687s 00:21:37.871 09:59:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:37.871 ************************************ 00:21:37.871 END TEST fio_dif_1_default 00:21:37.871 09:59:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:21:37.871 ************************************ 00:21:37.871 09:59:01 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:21:37.871 09:59:01 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:37.871 09:59:01 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:37.871 09:59:01 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:21:37.871 ************************************ 00:21:37.871 START TEST fio_dif_1_multi_subsystems 00:21:37.871 ************************************ 00:21:37.871 09:59:01 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:21:37.871 09:59:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:21:37.871 09:59:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:21:37.871 09:59:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:21:37.871 09:59:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:21:37.871 09:59:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:21:37.871 09:59:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:21:37.871 09:59:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:21:37.871 09:59:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.871 09:59:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:37.871 bdev_null0 00:21:37.871 09:59:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.871 09:59:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:21:37.871 09:59:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.871 09:59:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:37.871 09:59:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.871 09:59:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:21:37.871 09:59:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.871 09:59:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:37.871 09:59:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.871 09:59:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:21:37.871 09:59:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.871 09:59:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:37.871 [2024-12-06 09:59:01.573063] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:37.871 09:59:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.871 09:59:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:21:37.871 09:59:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:21:37.871 09:59:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:21:37.871 09:59:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:21:37.871 09:59:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.871 09:59:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:37.871 bdev_null1 00:21:37.871 09:59:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:21:37.871 09:59:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:21:37.871 09:59:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.871 09:59:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:37.871 09:59:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.871 09:59:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:21:37.871 09:59:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.871 09:59:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:37.871 09:59:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.871 09:59:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:37.871 09:59:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.871 09:59:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:37.871 09:59:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.871 09:59:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:21:37.871 09:59:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:21:37.871 09:59:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:21:37.871 09:59:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:21:37.871 09:59:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:21:37.871 09:59:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:37.871 09:59:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:37.871 09:59:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:37.871 { 00:21:37.871 "params": { 00:21:37.871 "name": "Nvme$subsystem", 00:21:37.871 "trtype": "$TEST_TRANSPORT", 00:21:37.871 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:37.871 "adrfam": "ipv4", 00:21:37.871 "trsvcid": "$NVMF_PORT", 00:21:37.871 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:37.871 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:37.871 "hdgst": ${hdgst:-false}, 00:21:37.871 "ddgst": ${ddgst:-false} 00:21:37.871 }, 00:21:37.871 "method": "bdev_nvme_attach_controller" 00:21:37.871 } 00:21:37.871 EOF 00:21:37.871 )") 00:21:37.871 09:59:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:37.871 09:59:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:37.871 09:59:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:21:37.871 09:59:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:37.871 09:59:01 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:37.871 09:59:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:21:37.871 09:59:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:37.871 09:59:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:21:37.871 09:59:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:21:37.871 09:59:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:37.871 09:59:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:37.871 09:59:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:21:37.871 09:59:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:37.871 09:59:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:21:37.871 09:59:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:37.871 09:59:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:21:37.871 09:59:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:21:37.871 09:59:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:21:37.871 09:59:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:37.871 09:59:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:37.871 { 00:21:37.871 "params": { 00:21:37.871 "name": "Nvme$subsystem", 00:21:37.871 "trtype": "$TEST_TRANSPORT", 00:21:37.871 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:37.871 "adrfam": "ipv4", 00:21:37.871 "trsvcid": "$NVMF_PORT", 00:21:37.871 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:37.871 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:37.871 "hdgst": ${hdgst:-false}, 00:21:37.871 "ddgst": ${ddgst:-false} 00:21:37.871 }, 00:21:37.871 "method": "bdev_nvme_attach_controller" 00:21:37.871 } 00:21:37.871 EOF 00:21:37.871 )") 00:21:37.871 09:59:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:21:37.871 09:59:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:21:37.871 09:59:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:21:37.871 09:59:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
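The subsystem plumbing for this two-file case is all present in the rpc_cmd traces above; condensed into direct calls it is just the loop below. The arguments are copied verbatim from the trace (64 MB null bdevs, 512-byte blocks with 16-byte metadata, DIF type 1, listeners on 10.0.0.3:4420); calling scripts/rpc.py directly instead of the rpc_cmd wrapper is the only editorial liberty:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
for sub in 0 1; do
    # DIF-capable null bdev backing namespace $sub
    $rpc bdev_null_create bdev_null$sub 64 512 --md-size 16 --dif-type 1
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$sub \
        --serial-number 53313233-$sub --allow-any-host
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$sub bdev_null$sub
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$sub \
        -t tcp -a 10.0.0.3 -s 4420
done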
00:21:37.871 09:59:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:21:37.871 09:59:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:37.871 "params": { 00:21:37.871 "name": "Nvme0", 00:21:37.871 "trtype": "tcp", 00:21:37.871 "traddr": "10.0.0.3", 00:21:37.871 "adrfam": "ipv4", 00:21:37.871 "trsvcid": "4420", 00:21:37.871 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:37.871 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:37.871 "hdgst": false, 00:21:37.871 "ddgst": false 00:21:37.871 }, 00:21:37.871 "method": "bdev_nvme_attach_controller" 00:21:37.871 },{ 00:21:37.871 "params": { 00:21:37.871 "name": "Nvme1", 00:21:37.871 "trtype": "tcp", 00:21:37.871 "traddr": "10.0.0.3", 00:21:37.871 "adrfam": "ipv4", 00:21:37.871 "trsvcid": "4420", 00:21:37.871 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:37.871 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:37.871 "hdgst": false, 00:21:37.871 "ddgst": false 00:21:37.871 }, 00:21:37.871 "method": "bdev_nvme_attach_controller" 00:21:37.871 }' 00:21:37.871 09:59:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:21:37.871 09:59:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:21:37.871 09:59:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:37.871 09:59:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:37.871 09:59:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:21:37.871 09:59:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:37.871 09:59:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:21:37.871 09:59:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:21:37.871 09:59:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:37.871 09:59:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:37.871 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:21:37.871 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:21:37.871 fio-3.35 00:21:37.871 Starting 2 threads 00:21:47.846 00:21:47.846 filename0: (groupid=0, jobs=1): err= 0: pid=83066: Fri Dec 6 09:59:12 2024 00:21:47.846 read: IOPS=5336, BW=20.8MiB/s (21.9MB/s)(208MiB/10001msec) 00:21:47.846 slat (usec): min=5, max=531, avg=13.35, stdev= 7.10 00:21:47.846 clat (usec): min=341, max=2658, avg=713.35, stdev=88.19 00:21:47.846 lat (usec): min=348, max=2670, avg=726.70, stdev=89.94 00:21:47.846 clat percentiles (usec): 00:21:47.846 | 1.00th=[ 586], 5.00th=[ 619], 10.00th=[ 644], 20.00th=[ 660], 00:21:47.846 | 30.00th=[ 668], 40.00th=[ 676], 50.00th=[ 693], 60.00th=[ 701], 00:21:47.846 | 70.00th=[ 725], 80.00th=[ 758], 90.00th=[ 824], 95.00th=[ 898], 00:21:47.846 | 99.00th=[ 1020], 99.50th=[ 1074], 99.90th=[ 1237], 99.95th=[ 1303], 00:21:47.846 | 99.99th=[ 1483] 00:21:47.846 bw ( KiB/s): min=17536, max=23392, per=50.11%, avg=21394.53, stdev=1621.84, samples=19 00:21:47.847 iops : min= 4384, max= 5848, 
avg=5348.63, stdev=405.46, samples=19 00:21:47.847 lat (usec) : 500=0.02%, 750=78.97%, 1000=19.67% 00:21:47.847 lat (msec) : 2=1.33%, 4=0.01% 00:21:47.847 cpu : usr=90.19%, sys=8.19%, ctx=156, majf=0, minf=0 00:21:47.847 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:47.847 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:47.847 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:47.847 issued rwts: total=53372,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:47.847 latency : target=0, window=0, percentile=100.00%, depth=4 00:21:47.847 filename1: (groupid=0, jobs=1): err= 0: pid=83067: Fri Dec 6 09:59:12 2024 00:21:47.847 read: IOPS=5337, BW=20.8MiB/s (21.9MB/s)(209MiB/10001msec) 00:21:47.847 slat (usec): min=5, max=127, avg=13.75, stdev= 6.52 00:21:47.847 clat (usec): min=371, max=2653, avg=710.38, stdev=84.56 00:21:47.847 lat (usec): min=382, max=2665, avg=724.13, stdev=86.25 00:21:47.847 clat percentiles (usec): 00:21:47.847 | 1.00th=[ 619], 5.00th=[ 635], 10.00th=[ 644], 20.00th=[ 652], 00:21:47.847 | 30.00th=[ 668], 40.00th=[ 676], 50.00th=[ 685], 60.00th=[ 701], 00:21:47.847 | 70.00th=[ 717], 80.00th=[ 750], 90.00th=[ 816], 95.00th=[ 889], 00:21:47.847 | 99.00th=[ 1012], 99.50th=[ 1057], 99.90th=[ 1221], 99.95th=[ 1254], 00:21:47.847 | 99.99th=[ 1401] 00:21:47.847 bw ( KiB/s): min=17536, max=23424, per=50.11%, avg=21397.89, stdev=1625.47, samples=19 00:21:47.847 iops : min= 4384, max= 5856, avg=5349.47, stdev=406.37, samples=19 00:21:47.847 lat (usec) : 500=0.02%, 750=79.99%, 1000=18.76% 00:21:47.847 lat (msec) : 2=1.22%, 4=0.01% 00:21:47.847 cpu : usr=90.30%, sys=8.26%, ctx=114, majf=0, minf=0 00:21:47.847 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:47.847 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:47.847 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:47.847 issued rwts: total=53380,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:47.847 latency : target=0, window=0, percentile=100.00%, depth=4 00:21:47.847 00:21:47.847 Run status group 0 (all jobs): 00:21:47.847 READ: bw=41.7MiB/s (43.7MB/s), 20.8MiB/s-20.8MiB/s (21.9MB/s-21.9MB/s), io=417MiB (437MB), run=10001-10001msec 00:21:47.847 09:59:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:21:47.847 09:59:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:21:47.847 09:59:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:21:47.847 09:59:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:21:47.847 09:59:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:21:47.847 09:59:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:47.847 09:59:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.847 09:59:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:47.847 09:59:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.847 09:59:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:21:47.847 09:59:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.847 09:59:12 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:21:47.847 09:59:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.847 09:59:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:21:47.847 09:59:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:21:47.847 09:59:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:21:47.847 09:59:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:47.847 09:59:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.847 09:59:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:47.847 09:59:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.847 09:59:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:21:47.847 09:59:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.847 09:59:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:47.847 09:59:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.847 00:21:47.847 real 0m11.149s 00:21:47.847 user 0m18.791s 00:21:47.847 sys 0m1.930s 00:21:47.847 09:59:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:47.847 09:59:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:47.847 ************************************ 00:21:47.847 END TEST fio_dif_1_multi_subsystems 00:21:47.847 ************************************ 00:21:47.847 09:59:12 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:21:47.847 09:59:12 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:47.847 09:59:12 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:47.847 09:59:12 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:21:47.847 ************************************ 00:21:47.847 START TEST fio_dif_rand_params 00:21:47.847 ************************************ 00:21:47.847 09:59:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:21:47.847 09:59:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:21:47.847 09:59:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:21:47.847 09:59:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:21:47.847 09:59:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:21:47.847 09:59:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:21:47.847 09:59:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:21:47.847 09:59:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:21:47.847 09:59:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:21:47.847 09:59:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:21:47.847 09:59:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:21:47.847 09:59:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:21:47.847 09:59:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:21:47.847 09:59:12 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:21:47.847 09:59:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.847 09:59:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:47.847 bdev_null0 00:21:47.847 09:59:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.847 09:59:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:21:47.847 09:59:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.848 09:59:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:47.848 09:59:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.848 09:59:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:21:47.848 09:59:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.848 09:59:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:47.848 09:59:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.848 09:59:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:21:47.848 09:59:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.848 09:59:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:47.848 [2024-12-06 09:59:12.779730] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:47.848 09:59:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.848 09:59:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:21:47.848 09:59:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:21:47.848 09:59:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:21:47.848 09:59:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:21:47.848 09:59:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:21:47.848 09:59:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:47.848 09:59:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:47.848 09:59:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:47.848 { 00:21:47.848 "params": { 00:21:47.848 "name": "Nvme$subsystem", 00:21:47.848 "trtype": "$TEST_TRANSPORT", 00:21:47.848 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:47.848 "adrfam": "ipv4", 00:21:47.848 "trsvcid": "$NVMF_PORT", 00:21:47.848 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:47.848 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:47.848 "hdgst": ${hdgst:-false}, 00:21:47.848 "ddgst": ${ddgst:-false} 00:21:47.848 }, 00:21:47.848 "method": "bdev_nvme_attach_controller" 00:21:47.848 } 00:21:47.848 EOF 00:21:47.848 )") 00:21:47.848 09:59:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:21:47.848 09:59:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:47.848 09:59:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:21:47.848 09:59:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:47.848 09:59:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:47.848 09:59:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:21:47.848 09:59:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:47.848 09:59:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:47.848 09:59:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:21:47.848 09:59:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:47.848 09:59:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:47.848 09:59:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:21:47.848 09:59:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:47.848 09:59:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:21:47.848 09:59:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:21:47.848 09:59:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:21:47.848 09:59:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:47.848 09:59:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
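Stripped of the /dev/fd/62 and /dev/fd/61 indirection (the generated JSON config and job file), the invocation traced above is the standard SPDK fio bdev-plugin pattern. A sketch with the plugin and fio paths copied from the trace and the job parameters taken from this NULL_DIF=3 block (128k random reads, 3 jobs, queue depth 3, 5 s runtime); bdev.json stands for the generated config, and the Nvme0n1 bdev name is an assumption derived from the Nvme0 controller name in that config:

# fio drives the SPDK bdev directly through the LD_PRELOADed ioengine
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=bdev.json \
    --thread --rw=randread --bs=128k --iodepth=3 --numjobs=3 \
    --runtime=5 --name=filename0 --filename=Nvme0n1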
00:21:47.848 09:59:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:21:47.848 09:59:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:47.848 "params": { 00:21:47.848 "name": "Nvme0", 00:21:47.848 "trtype": "tcp", 00:21:47.848 "traddr": "10.0.0.3", 00:21:47.848 "adrfam": "ipv4", 00:21:47.848 "trsvcid": "4420", 00:21:47.848 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:47.848 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:47.848 "hdgst": false, 00:21:47.848 "ddgst": false 00:21:47.848 }, 00:21:47.848 "method": "bdev_nvme_attach_controller" 00:21:47.848 }' 00:21:47.848 09:59:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:21:47.848 09:59:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:21:47.848 09:59:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:47.848 09:59:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:47.848 09:59:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:21:47.848 09:59:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:47.848 09:59:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:21:47.848 09:59:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:21:47.848 09:59:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:47.848 09:59:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:47.848 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:21:47.848 ... 
00:21:47.848 fio-3.35 00:21:47.848 Starting 3 threads 00:21:54.414 00:21:54.414 filename0: (groupid=0, jobs=1): err= 0: pid=83228: Fri Dec 6 09:59:18 2024 00:21:54.414 read: IOPS=291, BW=36.4MiB/s (38.2MB/s)(182MiB/5003msec) 00:21:54.414 slat (nsec): min=6596, max=60873, avg=12721.88, stdev=7149.62 00:21:54.414 clat (usec): min=8523, max=11633, avg=10262.48, stdev=249.77 00:21:54.414 lat (usec): min=8530, max=11654, avg=10275.20, stdev=249.84 00:21:54.414 clat percentiles (usec): 00:21:54.414 | 1.00th=[ 9896], 5.00th=[10028], 10.00th=[10028], 20.00th=[10028], 00:21:54.414 | 30.00th=[10159], 40.00th=[10159], 50.00th=[10159], 60.00th=[10290], 00:21:54.414 | 70.00th=[10421], 80.00th=[10552], 90.00th=[10552], 95.00th=[10683], 00:21:54.414 | 99.00th=[10945], 99.50th=[10945], 99.90th=[11600], 99.95th=[11600], 00:21:54.414 | 99.99th=[11600] 00:21:54.414 bw ( KiB/s): min=36096, max=38400, per=33.39%, avg=37376.00, stdev=665.11, samples=9 00:21:54.414 iops : min= 282, max= 300, avg=292.00, stdev= 5.20, samples=9 00:21:54.414 lat (msec) : 10=7.82%, 20=92.18% 00:21:54.415 cpu : usr=94.60%, sys=4.86%, ctx=11, majf=0, minf=0 00:21:54.415 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:54.415 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:54.415 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:54.415 issued rwts: total=1458,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:54.415 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:54.415 filename0: (groupid=0, jobs=1): err= 0: pid=83229: Fri Dec 6 09:59:18 2024 00:21:54.415 read: IOPS=291, BW=36.4MiB/s (38.2MB/s)(182MiB/5004msec) 00:21:54.415 slat (nsec): min=6622, max=52106, avg=11146.15, stdev=5759.58 00:21:54.415 clat (usec): min=7094, max=11637, avg=10266.84, stdev=282.42 00:21:54.415 lat (usec): min=7101, max=11657, avg=10277.99, stdev=282.47 00:21:54.415 clat percentiles (usec): 00:21:54.415 | 1.00th=[ 9896], 5.00th=[10028], 10.00th=[10028], 20.00th=[10028], 00:21:54.415 | 30.00th=[10159], 40.00th=[10159], 50.00th=[10159], 60.00th=[10290], 00:21:54.415 | 70.00th=[10421], 80.00th=[10552], 90.00th=[10552], 95.00th=[10683], 00:21:54.415 | 99.00th=[10945], 99.50th=[11076], 99.90th=[11600], 99.95th=[11600], 00:21:54.415 | 99.99th=[11600] 00:21:54.415 bw ( KiB/s): min=36096, max=38400, per=33.39%, avg=37376.00, stdev=665.11, samples=9 00:21:54.415 iops : min= 282, max= 300, avg=292.00, stdev= 5.20, samples=9 00:21:54.415 lat (msec) : 10=7.48%, 20=92.52% 00:21:54.415 cpu : usr=95.16%, sys=4.26%, ctx=6, majf=0, minf=0 00:21:54.415 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:54.415 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:54.415 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:54.415 issued rwts: total=1458,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:54.415 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:54.415 filename0: (groupid=0, jobs=1): err= 0: pid=83230: Fri Dec 6 09:59:18 2024 00:21:54.415 read: IOPS=291, BW=36.5MiB/s (38.3MB/s)(183MiB/5005msec) 00:21:54.415 slat (nsec): min=6377, max=68640, avg=12588.89, stdev=7161.65 00:21:54.415 clat (usec): min=3770, max=11087, avg=10245.27, stdev=373.14 00:21:54.415 lat (usec): min=3779, max=11128, avg=10257.85, stdev=373.19 00:21:54.415 clat percentiles (usec): 00:21:54.415 | 1.00th=[ 9896], 5.00th=[ 9896], 10.00th=[10028], 20.00th=[10028], 00:21:54.415 | 30.00th=[10159], 40.00th=[10159], 
50.00th=[10159], 60.00th=[10290], 00:21:54.415 | 70.00th=[10421], 80.00th=[10552], 90.00th=[10552], 95.00th=[10683], 00:21:54.415 | 99.00th=[10945], 99.50th=[10945], 99.90th=[11076], 99.95th=[11076], 00:21:54.415 | 99.99th=[11076] 00:21:54.415 bw ( KiB/s): min=36096, max=38400, per=33.34%, avg=37324.80, stdev=741.96, samples=10 00:21:54.415 iops : min= 282, max= 300, avg=291.60, stdev= 5.80, samples=10 00:21:54.415 lat (msec) : 4=0.21%, 10=8.90%, 20=90.90% 00:21:54.415 cpu : usr=94.70%, sys=4.76%, ctx=7, majf=0, minf=0 00:21:54.415 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:54.415 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:54.415 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:54.415 issued rwts: total=1461,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:54.415 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:54.415 00:21:54.415 Run status group 0 (all jobs): 00:21:54.415 READ: bw=109MiB/s (115MB/s), 36.4MiB/s-36.5MiB/s (38.2MB/s-38.3MB/s), io=547MiB (574MB), run=5003-5005msec 00:21:54.415 09:59:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:21:54.415 09:59:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:21:54.415 09:59:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:21:54.415 09:59:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:21:54.415 09:59:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:21:54.415 09:59:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:54.415 09:59:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.415 09:59:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:54.415 09:59:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.415 09:59:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:21:54.415 09:59:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.415 09:59:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:54.415 09:59:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.415 09:59:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:21:54.415 09:59:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:21:54.415 09:59:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:21:54.415 09:59:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:21:54.415 09:59:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:21:54.415 09:59:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:21:54.415 09:59:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:21:54.415 09:59:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:21:54.415 09:59:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:21:54.415 09:59:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:21:54.415 09:59:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:21:54.415 09:59:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:21:54.415 
09:59:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.415 09:59:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:54.415 bdev_null0 00:21:54.415 09:59:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.415 09:59:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:21:54.415 09:59:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.415 09:59:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:54.415 09:59:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.415 09:59:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:21:54.415 09:59:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.415 09:59:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:54.415 09:59:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.415 09:59:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:21:54.415 09:59:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.415 09:59:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:54.415 [2024-12-06 09:59:18.862131] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:54.415 09:59:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.415 09:59:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:21:54.415 09:59:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:21:54.415 09:59:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:21:54.415 09:59:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:21:54.415 09:59:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.415 09:59:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:54.415 bdev_null1 00:21:54.415 09:59:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.415 09:59:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:21:54.415 09:59:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.415 09:59:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:54.415 09:59:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.415 09:59:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:21:54.415 09:59:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.415 09:59:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:54.415 09:59:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.415 09:59:18 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:54.415 09:59:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.415 09:59:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:54.415 09:59:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.415 09:59:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:21:54.415 09:59:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:21:54.415 09:59:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:21:54.415 09:59:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:21:54.415 09:59:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.415 09:59:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:54.415 bdev_null2 00:21:54.415 09:59:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.415 09:59:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:21:54.415 09:59:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.415 09:59:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:54.415 09:59:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.415 09:59:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:21:54.415 09:59:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.415 09:59:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:54.415 09:59:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.415 09:59:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:21:54.415 09:59:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.415 09:59:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:54.416 09:59:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.416 09:59:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:21:54.416 09:59:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:21:54.416 09:59:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:21:54.416 09:59:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:21:54.416 09:59:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:21:54.416 09:59:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:54.416 09:59:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:54.416 { 00:21:54.416 "params": { 00:21:54.416 "name": "Nvme$subsystem", 00:21:54.416 "trtype": "$TEST_TRANSPORT", 00:21:54.416 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:54.416 "adrfam": "ipv4", 00:21:54.416 "trsvcid": "$NVMF_PORT", 00:21:54.416 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:21:54.416 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:54.416 "hdgst": ${hdgst:-false}, 00:21:54.416 "ddgst": ${ddgst:-false} 00:21:54.416 }, 00:21:54.416 "method": "bdev_nvme_attach_controller" 00:21:54.416 } 00:21:54.416 EOF 00:21:54.416 )") 00:21:54.416 09:59:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:54.416 09:59:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:21:54.416 09:59:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:54.416 09:59:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:21:54.416 09:59:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:54.416 09:59:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:21:54.416 09:59:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:54.416 09:59:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:54.416 09:59:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:54.416 09:59:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:21:54.416 09:59:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:54.416 09:59:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:21:54.416 09:59:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:54.416 09:59:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:54.416 09:59:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:21:54.416 09:59:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:54.416 09:59:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:21:54.416 09:59:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:21:54.416 09:59:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:21:54.416 09:59:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:54.416 09:59:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:54.416 { 00:21:54.416 "params": { 00:21:54.416 "name": "Nvme$subsystem", 00:21:54.416 "trtype": "$TEST_TRANSPORT", 00:21:54.416 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:54.416 "adrfam": "ipv4", 00:21:54.416 "trsvcid": "$NVMF_PORT", 00:21:54.416 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:54.416 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:54.416 "hdgst": ${hdgst:-false}, 00:21:54.416 "ddgst": ${ddgst:-false} 00:21:54.416 }, 00:21:54.416 "method": "bdev_nvme_attach_controller" 00:21:54.416 } 00:21:54.416 EOF 00:21:54.416 )") 00:21:54.416 09:59:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:21:54.416 09:59:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:21:54.416 09:59:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:21:54.416 09:59:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 
00:21:54.416 09:59:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:54.416 09:59:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:54.416 { 00:21:54.416 "params": { 00:21:54.416 "name": "Nvme$subsystem", 00:21:54.416 "trtype": "$TEST_TRANSPORT", 00:21:54.416 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:54.416 "adrfam": "ipv4", 00:21:54.416 "trsvcid": "$NVMF_PORT", 00:21:54.416 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:54.416 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:54.416 "hdgst": ${hdgst:-false}, 00:21:54.416 "ddgst": ${ddgst:-false} 00:21:54.416 }, 00:21:54.416 "method": "bdev_nvme_attach_controller" 00:21:54.416 } 00:21:54.416 EOF 00:21:54.416 )") 00:21:54.416 09:59:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:21:54.416 09:59:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:21:54.416 09:59:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:21:54.416 09:59:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:21:54.416 09:59:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:21:54.416 09:59:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:54.416 "params": { 00:21:54.416 "name": "Nvme0", 00:21:54.416 "trtype": "tcp", 00:21:54.416 "traddr": "10.0.0.3", 00:21:54.416 "adrfam": "ipv4", 00:21:54.416 "trsvcid": "4420", 00:21:54.416 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:54.416 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:54.416 "hdgst": false, 00:21:54.416 "ddgst": false 00:21:54.416 }, 00:21:54.416 "method": "bdev_nvme_attach_controller" 00:21:54.416 },{ 00:21:54.416 "params": { 00:21:54.416 "name": "Nvme1", 00:21:54.416 "trtype": "tcp", 00:21:54.416 "traddr": "10.0.0.3", 00:21:54.416 "adrfam": "ipv4", 00:21:54.416 "trsvcid": "4420", 00:21:54.416 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:54.416 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:54.416 "hdgst": false, 00:21:54.416 "ddgst": false 00:21:54.416 }, 00:21:54.416 "method": "bdev_nvme_attach_controller" 00:21:54.416 },{ 00:21:54.416 "params": { 00:21:54.416 "name": "Nvme2", 00:21:54.416 "trtype": "tcp", 00:21:54.416 "traddr": "10.0.0.3", 00:21:54.416 "adrfam": "ipv4", 00:21:54.416 "trsvcid": "4420", 00:21:54.416 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:54.416 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:54.416 "hdgst": false, 00:21:54.416 "ddgst": false 00:21:54.416 }, 00:21:54.416 "method": "bdev_nvme_attach_controller" 00:21:54.416 }' 00:21:54.416 09:59:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:21:54.416 09:59:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:21:54.416 09:59:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:54.416 09:59:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:54.416 09:59:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:21:54.416 09:59:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:54.416 09:59:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:21:54.416 09:59:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:21:54.416 09:59:18 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:54.416 09:59:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:54.416 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:21:54.416 ... 00:21:54.416 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:21:54.416 ... 00:21:54.416 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:21:54.416 ... 00:21:54.416 fio-3.35 00:21:54.416 Starting 24 threads 00:22:06.625 00:22:06.625 filename0: (groupid=0, jobs=1): err= 0: pid=83325: Fri Dec 6 09:59:29 2024 00:22:06.625 read: IOPS=262, BW=1049KiB/s (1074kB/s)(10.2MiB/10002msec) 00:22:06.625 slat (usec): min=5, max=4042, avg=28.21, stdev=153.27 00:22:06.625 clat (usec): min=1471, max=141755, avg=60846.25, stdev=21548.77 00:22:06.625 lat (usec): min=1479, max=141774, avg=60874.46, stdev=21549.70 00:22:06.625 clat percentiles (msec): 00:22:06.625 | 1.00th=[ 4], 5.00th=[ 35], 10.00th=[ 39], 20.00th=[ 44], 00:22:06.625 | 30.00th=[ 48], 40.00th=[ 55], 50.00th=[ 61], 60.00th=[ 66], 00:22:06.625 | 70.00th=[ 71], 80.00th=[ 77], 90.00th=[ 91], 95.00th=[ 97], 00:22:06.625 | 99.00th=[ 118], 99.50th=[ 124], 99.90th=[ 138], 99.95th=[ 142], 00:22:06.625 | 99.99th=[ 142] 00:22:06.626 bw ( KiB/s): min= 600, max= 1272, per=4.21%, avg=1013.00, stdev=189.72, samples=19 00:22:06.626 iops : min= 150, max= 318, avg=253.21, stdev=47.42, samples=19 00:22:06.626 lat (msec) : 2=0.23%, 4=1.45%, 10=1.11%, 20=0.30%, 50=32.29% 00:22:06.626 lat (msec) : 100=60.50%, 250=4.12% 00:22:06.626 cpu : usr=43.57%, sys=1.49%, ctx=1463, majf=0, minf=9 00:22:06.626 IO depths : 1=0.1%, 2=1.1%, 4=4.3%, 8=79.1%, 16=15.2%, 32=0.0%, >=64=0.0% 00:22:06.626 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:06.626 complete : 0=0.0%, 4=88.1%, 8=10.9%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:06.626 issued rwts: total=2623,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:06.626 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:06.626 filename0: (groupid=0, jobs=1): err= 0: pid=83326: Fri Dec 6 09:59:29 2024 00:22:06.626 read: IOPS=246, BW=988KiB/s (1011kB/s)(9896KiB/10020msec) 00:22:06.626 slat (usec): min=5, max=13026, avg=39.19, stdev=342.89 00:22:06.626 clat (msec): min=16, max=152, avg=64.54, stdev=20.77 00:22:06.626 lat (msec): min=16, max=152, avg=64.58, stdev=20.76 00:22:06.626 clat percentiles (msec): 00:22:06.626 | 1.00th=[ 32], 5.00th=[ 36], 10.00th=[ 40], 20.00th=[ 46], 00:22:06.626 | 30.00th=[ 51], 40.00th=[ 59], 50.00th=[ 63], 60.00th=[ 68], 00:22:06.626 | 70.00th=[ 72], 80.00th=[ 84], 90.00th=[ 95], 95.00th=[ 102], 00:22:06.626 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 144], 99.95th=[ 153], 00:22:06.626 | 99.99th=[ 153] 00:22:06.626 bw ( KiB/s): min= 528, max= 1280, per=4.08%, avg=982.80, stdev=207.27, samples=20 00:22:06.626 iops : min= 132, max= 320, avg=245.65, stdev=51.86, samples=20 00:22:06.626 lat (msec) : 20=0.08%, 50=29.91%, 100=63.95%, 250=6.06% 00:22:06.626 cpu : usr=45.02%, sys=1.60%, ctx=1121, majf=0, minf=9 00:22:06.626 IO depths : 1=0.1%, 2=1.9%, 4=7.1%, 8=76.1%, 16=14.9%, 32=0.0%, >=64=0.0% 00:22:06.626 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:06.626 complete : 0=0.0%, 4=88.9%, 8=9.6%, 16=1.5%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:22:06.626 issued rwts: total=2474,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:06.626 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:06.626 filename0: (groupid=0, jobs=1): err= 0: pid=83327: Fri Dec 6 09:59:29 2024 00:22:06.626 read: IOPS=239, BW=959KiB/s (982kB/s)(9604KiB/10013msec) 00:22:06.626 slat (usec): min=4, max=8040, avg=39.23, stdev=316.93 00:22:06.626 clat (msec): min=19, max=143, avg=66.50, stdev=22.34 00:22:06.626 lat (msec): min=19, max=143, avg=66.54, stdev=22.34 00:22:06.626 clat percentiles (msec): 00:22:06.626 | 1.00th=[ 24], 5.00th=[ 35], 10.00th=[ 41], 20.00th=[ 48], 00:22:06.626 | 30.00th=[ 56], 40.00th=[ 61], 50.00th=[ 65], 60.00th=[ 69], 00:22:06.626 | 70.00th=[ 72], 80.00th=[ 82], 90.00th=[ 96], 95.00th=[ 107], 00:22:06.626 | 99.00th=[ 130], 99.50th=[ 142], 99.90th=[ 144], 99.95th=[ 144], 00:22:06.626 | 99.99th=[ 144] 00:22:06.626 bw ( KiB/s): min= 638, max= 1520, per=3.98%, avg=956.80, stdev=216.11, samples=20 00:22:06.626 iops : min= 159, max= 380, avg=239.15, stdev=54.05, samples=20 00:22:06.626 lat (msec) : 20=0.25%, 50=23.95%, 100=67.47%, 250=8.33% 00:22:06.626 cpu : usr=41.60%, sys=1.63%, ctx=1225, majf=0, minf=9 00:22:06.626 IO depths : 1=0.1%, 2=1.7%, 4=7.0%, 8=75.7%, 16=15.6%, 32=0.0%, >=64=0.0% 00:22:06.626 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:06.626 complete : 0=0.0%, 4=89.4%, 8=9.1%, 16=1.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:06.626 issued rwts: total=2401,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:06.626 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:06.626 filename0: (groupid=0, jobs=1): err= 0: pid=83328: Fri Dec 6 09:59:29 2024 00:22:06.626 read: IOPS=259, BW=1040KiB/s (1065kB/s)(10.2MiB/10004msec) 00:22:06.626 slat (usec): min=4, max=8030, avg=30.96, stdev=235.93 00:22:06.626 clat (msec): min=3, max=138, avg=61.38, stdev=21.21 00:22:06.626 lat (msec): min=3, max=138, avg=61.41, stdev=21.21 00:22:06.626 clat percentiles (msec): 00:22:06.626 | 1.00th=[ 8], 5.00th=[ 26], 10.00th=[ 38], 20.00th=[ 45], 00:22:06.626 | 30.00th=[ 48], 40.00th=[ 55], 50.00th=[ 62], 60.00th=[ 67], 00:22:06.626 | 70.00th=[ 71], 80.00th=[ 79], 90.00th=[ 90], 95.00th=[ 99], 00:22:06.626 | 99.00th=[ 122], 99.50th=[ 126], 99.90th=[ 134], 99.95th=[ 138], 00:22:06.626 | 99.99th=[ 138] 00:22:06.626 bw ( KiB/s): min= 528, max= 1536, per=4.23%, avg=1016.74, stdev=215.57, samples=19 00:22:06.626 iops : min= 132, max= 384, avg=254.16, stdev=53.92, samples=19 00:22:06.626 lat (msec) : 4=0.12%, 10=1.35%, 20=0.27%, 50=32.37%, 100=61.94% 00:22:06.626 lat (msec) : 250=3.96% 00:22:06.626 cpu : usr=41.37%, sys=1.61%, ctx=1263, majf=0, minf=9 00:22:06.626 IO depths : 1=0.1%, 2=1.4%, 4=5.5%, 8=77.9%, 16=15.2%, 32=0.0%, >=64=0.0% 00:22:06.626 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:06.626 complete : 0=0.0%, 4=88.5%, 8=10.3%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:06.626 issued rwts: total=2601,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:06.626 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:06.626 filename0: (groupid=0, jobs=1): err= 0: pid=83329: Fri Dec 6 09:59:29 2024 00:22:06.626 read: IOPS=259, BW=1039KiB/s (1064kB/s)(10.2MiB/10055msec) 00:22:06.626 slat (usec): min=6, max=8033, avg=24.91, stdev=235.42 00:22:06.626 clat (usec): min=830, max=142517, avg=61402.71, stdev=23355.75 00:22:06.626 lat (usec): min=838, max=142543, avg=61427.61, stdev=23357.54 00:22:06.626 clat percentiles (msec): 00:22:06.626 | 1.00th=[ 3], 5.00th=[ 12], 
10.00th=[ 26], 20.00th=[ 47], 00:22:06.626 | 30.00th=[ 55], 40.00th=[ 60], 50.00th=[ 62], 60.00th=[ 70], 00:22:06.626 | 70.00th=[ 72], 80.00th=[ 81], 90.00th=[ 92], 95.00th=[ 96], 00:22:06.626 | 99.00th=[ 107], 99.50th=[ 108], 99.90th=[ 121], 99.95th=[ 142], 00:22:06.626 | 99.99th=[ 142] 00:22:06.626 bw ( KiB/s): min= 638, max= 2888, per=4.31%, avg=1036.70, stdev=450.48, samples=20 00:22:06.626 iops : min= 159, max= 722, avg=259.15, stdev=112.64, samples=20 00:22:06.626 lat (usec) : 1000=0.08% 00:22:06.626 lat (msec) : 2=0.08%, 4=2.14%, 10=1.91%, 20=2.91%, 50=19.53% 00:22:06.626 lat (msec) : 100=70.05%, 250=3.29% 00:22:06.626 cpu : usr=34.02%, sys=1.24%, ctx=959, majf=0, minf=0 00:22:06.626 IO depths : 1=0.2%, 2=1.6%, 4=5.6%, 8=76.7%, 16=15.9%, 32=0.0%, >=64=0.0% 00:22:06.626 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:06.626 complete : 0=0.0%, 4=89.2%, 8=9.6%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:06.626 issued rwts: total=2611,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:06.626 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:06.626 filename0: (groupid=0, jobs=1): err= 0: pid=83330: Fri Dec 6 09:59:29 2024 00:22:06.626 read: IOPS=266, BW=1065KiB/s (1091kB/s)(10.4MiB/10001msec) 00:22:06.626 slat (usec): min=5, max=8037, avg=33.72, stdev=274.19 00:22:06.626 clat (usec): min=795, max=145369, avg=59937.42, stdev=23553.86 00:22:06.626 lat (usec): min=802, max=145389, avg=59971.13, stdev=23552.36 00:22:06.626 clat percentiles (usec): 00:22:06.626 | 1.00th=[ 1434], 5.00th=[ 21627], 10.00th=[ 35914], 20.00th=[ 43254], 00:22:06.626 | 30.00th=[ 47973], 40.00th=[ 54264], 50.00th=[ 60031], 60.00th=[ 65274], 00:22:06.626 | 70.00th=[ 70779], 80.00th=[ 74974], 90.00th=[ 90702], 95.00th=[ 98042], 00:22:06.626 | 99.00th=[120062], 99.50th=[130548], 99.90th=[130548], 99.95th=[145753], 00:22:06.626 | 99.99th=[145753] 00:22:06.626 bw ( KiB/s): min= 528, max= 1264, per=4.18%, avg=1006.63, stdev=189.68, samples=19 00:22:06.626 iops : min= 132, max= 316, avg=251.63, stdev=47.39, samples=19 00:22:06.626 lat (usec) : 1000=0.23% 00:22:06.626 lat (msec) : 2=1.92%, 4=1.65%, 10=1.09%, 50=31.09%, 100=59.22% 00:22:06.626 lat (msec) : 250=4.81% 00:22:06.626 cpu : usr=38.51%, sys=1.55%, ctx=1120, majf=0, minf=9 00:22:06.626 IO depths : 1=0.1%, 2=1.0%, 4=3.9%, 8=79.6%, 16=15.5%, 32=0.0%, >=64=0.0% 00:22:06.626 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:06.626 complete : 0=0.0%, 4=88.1%, 8=11.1%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:06.627 issued rwts: total=2663,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:06.627 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:06.627 filename0: (groupid=0, jobs=1): err= 0: pid=83331: Fri Dec 6 09:59:29 2024 00:22:06.627 read: IOPS=233, BW=935KiB/s (957kB/s)(9392KiB/10050msec) 00:22:06.627 slat (usec): min=5, max=8032, avg=25.01, stdev=234.00 00:22:06.627 clat (msec): min=9, max=144, avg=68.29, stdev=24.09 00:22:06.627 lat (msec): min=9, max=144, avg=68.31, stdev=24.08 00:22:06.627 clat percentiles (msec): 00:22:06.627 | 1.00th=[ 11], 5.00th=[ 32], 10.00th=[ 38], 20.00th=[ 48], 00:22:06.627 | 30.00th=[ 60], 40.00th=[ 62], 50.00th=[ 69], 60.00th=[ 71], 00:22:06.627 | 70.00th=[ 80], 80.00th=[ 86], 90.00th=[ 97], 95.00th=[ 108], 00:22:06.627 | 99.00th=[ 132], 99.50th=[ 132], 99.90th=[ 144], 99.95th=[ 144], 00:22:06.627 | 99.99th=[ 144] 00:22:06.627 bw ( KiB/s): min= 624, max= 1936, per=3.88%, avg=932.70, stdev=284.46, samples=20 00:22:06.627 iops : min= 156, max= 484, avg=233.15, 
stdev=71.14, samples=20 00:22:06.627 lat (msec) : 10=0.38%, 20=3.19%, 50=17.89%, 100=69.72%, 250=8.82% 00:22:06.627 cpu : usr=31.79%, sys=1.13%, ctx=942, majf=0, minf=9 00:22:06.627 IO depths : 1=0.2%, 2=2.6%, 4=9.9%, 8=71.9%, 16=15.4%, 32=0.0%, >=64=0.0% 00:22:06.627 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:06.627 complete : 0=0.0%, 4=90.5%, 8=7.3%, 16=2.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:06.627 issued rwts: total=2348,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:06.627 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:06.627 filename0: (groupid=0, jobs=1): err= 0: pid=83332: Fri Dec 6 09:59:29 2024 00:22:06.627 read: IOPS=244, BW=980KiB/s (1003kB/s)(9828KiB/10031msec) 00:22:06.627 slat (usec): min=5, max=8043, avg=36.91, stdev=304.21 00:22:06.627 clat (msec): min=23, max=126, avg=65.03, stdev=19.40 00:22:06.627 lat (msec): min=23, max=126, avg=65.07, stdev=19.40 00:22:06.627 clat percentiles (msec): 00:22:06.627 | 1.00th=[ 25], 5.00th=[ 36], 10.00th=[ 42], 20.00th=[ 47], 00:22:06.627 | 30.00th=[ 55], 40.00th=[ 62], 50.00th=[ 65], 60.00th=[ 69], 00:22:06.627 | 70.00th=[ 72], 80.00th=[ 83], 90.00th=[ 93], 95.00th=[ 101], 00:22:06.627 | 99.00th=[ 108], 99.50th=[ 110], 99.90th=[ 126], 99.95th=[ 126], 00:22:06.627 | 99.99th=[ 127] 00:22:06.627 bw ( KiB/s): min= 656, max= 1536, per=4.07%, avg=979.10, stdev=183.08, samples=20 00:22:06.627 iops : min= 164, max= 384, avg=244.75, stdev=45.78, samples=20 00:22:06.627 lat (msec) : 50=25.80%, 100=69.03%, 250=5.17% 00:22:06.627 cpu : usr=41.24%, sys=1.54%, ctx=1359, majf=0, minf=9 00:22:06.627 IO depths : 1=0.2%, 2=1.1%, 4=4.0%, 8=78.6%, 16=16.1%, 32=0.0%, >=64=0.0% 00:22:06.627 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:06.627 complete : 0=0.0%, 4=88.6%, 8=10.5%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:06.627 issued rwts: total=2457,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:06.627 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:06.627 filename1: (groupid=0, jobs=1): err= 0: pid=83333: Fri Dec 6 09:59:29 2024 00:22:06.627 read: IOPS=247, BW=992KiB/s (1015kB/s)(9936KiB/10021msec) 00:22:06.627 slat (usec): min=4, max=9038, avg=31.96, stdev=292.07 00:22:06.627 clat (msec): min=16, max=130, avg=64.37, stdev=19.41 00:22:06.627 lat (msec): min=16, max=130, avg=64.40, stdev=19.42 00:22:06.627 clat percentiles (msec): 00:22:06.627 | 1.00th=[ 32], 5.00th=[ 36], 10.00th=[ 41], 20.00th=[ 47], 00:22:06.627 | 30.00th=[ 54], 40.00th=[ 59], 50.00th=[ 63], 60.00th=[ 69], 00:22:06.627 | 70.00th=[ 72], 80.00th=[ 79], 90.00th=[ 92], 95.00th=[ 100], 00:22:06.627 | 99.00th=[ 118], 99.50th=[ 131], 99.90th=[ 131], 99.95th=[ 131], 00:22:06.627 | 99.99th=[ 131] 00:22:06.627 bw ( KiB/s): min= 528, max= 1408, per=4.11%, avg=988.80, stdev=184.53, samples=20 00:22:06.627 iops : min= 132, max= 352, avg=247.20, stdev=46.13, samples=20 00:22:06.627 lat (msec) : 20=0.08%, 50=26.81%, 100=68.28%, 250=4.83% 00:22:06.627 cpu : usr=38.39%, sys=1.37%, ctx=1124, majf=0, minf=9 00:22:06.627 IO depths : 1=0.1%, 2=1.2%, 4=4.6%, 8=78.4%, 16=15.7%, 32=0.0%, >=64=0.0% 00:22:06.627 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:06.627 complete : 0=0.0%, 4=88.6%, 8=10.4%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:06.627 issued rwts: total=2484,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:06.627 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:06.627 filename1: (groupid=0, jobs=1): err= 0: pid=83334: Fri Dec 6 09:59:29 2024 00:22:06.627 read: 
IOPS=253, BW=1014KiB/s (1039kB/s)(9.92MiB/10011msec) 00:22:06.627 slat (usec): min=4, max=8059, avg=35.26, stdev=299.36 00:22:06.627 clat (msec): min=13, max=136, avg=62.87, stdev=18.84 00:22:06.627 lat (msec): min=13, max=136, avg=62.91, stdev=18.84 00:22:06.627 clat percentiles (msec): 00:22:06.627 | 1.00th=[ 31], 5.00th=[ 34], 10.00th=[ 41], 20.00th=[ 46], 00:22:06.627 | 30.00th=[ 50], 40.00th=[ 59], 50.00th=[ 63], 60.00th=[ 68], 00:22:06.627 | 70.00th=[ 71], 80.00th=[ 79], 90.00th=[ 90], 95.00th=[ 96], 00:22:06.627 | 99.00th=[ 108], 99.50th=[ 120], 99.90th=[ 120], 99.95th=[ 136], 00:22:06.627 | 99.99th=[ 136] 00:22:06.627 bw ( KiB/s): min= 656, max= 1500, per=4.20%, avg=1009.84, stdev=196.59, samples=19 00:22:06.627 iops : min= 164, max= 375, avg=252.42, stdev=49.20, samples=19 00:22:06.627 lat (msec) : 20=0.67%, 50=30.88%, 100=65.42%, 250=3.03% 00:22:06.627 cpu : usr=42.07%, sys=1.52%, ctx=1236, majf=0, minf=9 00:22:06.627 IO depths : 1=0.1%, 2=1.2%, 4=4.9%, 8=78.4%, 16=15.5%, 32=0.0%, >=64=0.0% 00:22:06.627 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:06.627 complete : 0=0.0%, 4=88.5%, 8=10.4%, 16=1.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:06.627 issued rwts: total=2539,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:06.627 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:06.627 filename1: (groupid=0, jobs=1): err= 0: pid=83335: Fri Dec 6 09:59:29 2024 00:22:06.627 read: IOPS=250, BW=1000KiB/s (1024kB/s)(9.78MiB/10010msec) 00:22:06.627 slat (usec): min=5, max=4001, avg=21.62, stdev=80.29 00:22:06.627 clat (msec): min=12, max=131, avg=63.88, stdev=20.19 00:22:06.627 lat (msec): min=12, max=131, avg=63.91, stdev=20.19 00:22:06.627 clat percentiles (msec): 00:22:06.627 | 1.00th=[ 26], 5.00th=[ 36], 10.00th=[ 41], 20.00th=[ 46], 00:22:06.627 | 30.00th=[ 52], 40.00th=[ 58], 50.00th=[ 63], 60.00th=[ 67], 00:22:06.627 | 70.00th=[ 71], 80.00th=[ 79], 90.00th=[ 94], 95.00th=[ 102], 00:22:06.627 | 99.00th=[ 121], 99.50th=[ 131], 99.90th=[ 131], 99.95th=[ 132], 00:22:06.627 | 99.99th=[ 132] 00:22:06.627 bw ( KiB/s): min= 624, max= 1408, per=4.13%, avg=994.89, stdev=191.36, samples=19 00:22:06.627 iops : min= 156, max= 352, avg=248.68, stdev=47.91, samples=19 00:22:06.627 lat (msec) : 20=0.08%, 50=27.65%, 100=67.20%, 250=5.07% 00:22:06.627 cpu : usr=39.01%, sys=1.39%, ctx=1381, majf=0, minf=9 00:22:06.627 IO depths : 1=0.1%, 2=1.2%, 4=4.3%, 8=78.8%, 16=15.6%, 32=0.0%, >=64=0.0% 00:22:06.627 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:06.627 complete : 0=0.0%, 4=88.4%, 8=10.7%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:06.627 issued rwts: total=2503,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:06.627 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:06.627 filename1: (groupid=0, jobs=1): err= 0: pid=83336: Fri Dec 6 09:59:29 2024 00:22:06.627 read: IOPS=254, BW=1018KiB/s (1043kB/s)(9.99MiB/10046msec) 00:22:06.627 slat (usec): min=6, max=12046, avg=31.22, stdev=315.10 00:22:06.627 clat (msec): min=6, max=118, avg=62.64, stdev=19.01 00:22:06.627 lat (msec): min=6, max=118, avg=62.67, stdev=19.00 00:22:06.627 clat percentiles (msec): 00:22:06.627 | 1.00th=[ 13], 5.00th=[ 28], 10.00th=[ 40], 20.00th=[ 48], 00:22:06.628 | 30.00th=[ 56], 40.00th=[ 60], 50.00th=[ 64], 60.00th=[ 68], 00:22:06.628 | 70.00th=[ 71], 80.00th=[ 75], 90.00th=[ 88], 95.00th=[ 95], 00:22:06.628 | 99.00th=[ 107], 99.50th=[ 114], 99.90th=[ 114], 99.95th=[ 116], 00:22:06.628 | 99.99th=[ 118] 00:22:06.628 bw ( KiB/s): min= 792, max= 1928, 
per=4.23%, avg=1016.30, stdev=233.21, samples=20 00:22:06.628 iops : min= 198, max= 482, avg=254.05, stdev=58.32, samples=20 00:22:06.628 lat (msec) : 10=0.08%, 20=1.92%, 50=23.00%, 100=72.39%, 250=2.62% 00:22:06.628 cpu : usr=38.78%, sys=1.57%, ctx=1199, majf=0, minf=9 00:22:06.628 IO depths : 1=0.1%, 2=0.5%, 4=1.9%, 8=81.0%, 16=16.5%, 32=0.0%, >=64=0.0% 00:22:06.628 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:06.628 complete : 0=0.0%, 4=88.0%, 8=11.5%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:06.628 issued rwts: total=2557,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:06.628 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:06.628 filename1: (groupid=0, jobs=1): err= 0: pid=83337: Fri Dec 6 09:59:29 2024 00:22:06.628 read: IOPS=260, BW=1043KiB/s (1068kB/s)(10.2MiB/10059msec) 00:22:06.628 slat (usec): min=6, max=8040, avg=31.27, stdev=310.69 00:22:06.628 clat (usec): min=927, max=140079, avg=61129.95, stdev=22841.73 00:22:06.628 lat (usec): min=934, max=140089, avg=61161.22, stdev=22848.95 00:22:06.628 clat percentiles (msec): 00:22:06.628 | 1.00th=[ 4], 5.00th=[ 12], 10.00th=[ 26], 20.00th=[ 47], 00:22:06.628 | 30.00th=[ 55], 40.00th=[ 60], 50.00th=[ 64], 60.00th=[ 69], 00:22:06.628 | 70.00th=[ 72], 80.00th=[ 79], 90.00th=[ 90], 95.00th=[ 95], 00:22:06.628 | 99.00th=[ 106], 99.50th=[ 111], 99.90th=[ 112], 99.95th=[ 125], 00:22:06.628 | 99.99th=[ 140] 00:22:06.628 bw ( KiB/s): min= 808, max= 2793, per=4.33%, avg=1041.15, stdev=420.41, samples=20 00:22:06.628 iops : min= 202, max= 698, avg=260.25, stdev=105.06, samples=20 00:22:06.628 lat (usec) : 1000=0.08% 00:22:06.628 lat (msec) : 4=1.07%, 10=3.05%, 20=3.28%, 50=18.23%, 100=71.74% 00:22:06.628 lat (msec) : 250=2.56% 00:22:06.628 cpu : usr=37.51%, sys=1.51%, ctx=1258, majf=0, minf=0 00:22:06.628 IO depths : 1=0.2%, 2=1.0%, 4=3.6%, 8=78.8%, 16=16.4%, 32=0.0%, >=64=0.0% 00:22:06.628 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:06.628 complete : 0=0.0%, 4=88.7%, 8=10.5%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:06.628 issued rwts: total=2622,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:06.628 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:06.628 filename1: (groupid=0, jobs=1): err= 0: pid=83338: Fri Dec 6 09:59:29 2024 00:22:06.628 read: IOPS=249, BW=999KiB/s (1023kB/s)(9.77MiB/10013msec) 00:22:06.628 slat (usec): min=5, max=8054, avg=34.95, stdev=340.19 00:22:06.628 clat (msec): min=19, max=131, avg=63.87, stdev=17.90 00:22:06.628 lat (msec): min=19, max=131, avg=63.90, stdev=17.91 00:22:06.628 clat percentiles (msec): 00:22:06.628 | 1.00th=[ 24], 5.00th=[ 35], 10.00th=[ 41], 20.00th=[ 48], 00:22:06.628 | 30.00th=[ 58], 40.00th=[ 61], 50.00th=[ 63], 60.00th=[ 69], 00:22:06.628 | 70.00th=[ 72], 80.00th=[ 79], 90.00th=[ 87], 95.00th=[ 96], 00:22:06.628 | 99.00th=[ 108], 99.50th=[ 108], 99.90th=[ 122], 99.95th=[ 123], 00:22:06.628 | 99.99th=[ 132] 00:22:06.628 bw ( KiB/s): min= 736, max= 1472, per=4.14%, avg=996.80, stdev=144.27, samples=20 00:22:06.628 iops : min= 184, max= 368, avg=249.15, stdev=36.08, samples=20 00:22:06.628 lat (msec) : 20=0.08%, 50=24.55%, 100=73.41%, 250=1.96% 00:22:06.628 cpu : usr=32.57%, sys=0.99%, ctx=956, majf=0, minf=9 00:22:06.628 IO depths : 1=0.1%, 2=0.5%, 4=1.9%, 8=81.0%, 16=16.6%, 32=0.0%, >=64=0.0% 00:22:06.628 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:06.628 complete : 0=0.0%, 4=88.1%, 8=11.5%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:06.628 issued rwts: total=2501,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:22:06.628 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:06.628 filename1: (groupid=0, jobs=1): err= 0: pid=83339: Fri Dec 6 09:59:29 2024 00:22:06.628 read: IOPS=257, BW=1029KiB/s (1054kB/s)(10.1MiB/10035msec) 00:22:06.628 slat (usec): min=3, max=8045, avg=32.94, stdev=300.90 00:22:06.628 clat (msec): min=7, max=125, avg=61.99, stdev=19.95 00:22:06.628 lat (msec): min=7, max=125, avg=62.02, stdev=19.94 00:22:06.628 clat percentiles (msec): 00:22:06.628 | 1.00th=[ 16], 5.00th=[ 24], 10.00th=[ 36], 20.00th=[ 47], 00:22:06.628 | 30.00th=[ 52], 40.00th=[ 59], 50.00th=[ 63], 60.00th=[ 68], 00:22:06.628 | 70.00th=[ 72], 80.00th=[ 77], 90.00th=[ 89], 95.00th=[ 95], 00:22:06.628 | 99.00th=[ 106], 99.50th=[ 108], 99.90th=[ 116], 99.95th=[ 121], 00:22:06.628 | 99.99th=[ 126] 00:22:06.628 bw ( KiB/s): min= 720, max= 2032, per=4.27%, avg=1026.30, stdev=257.52, samples=20 00:22:06.628 iops : min= 180, max= 508, avg=256.55, stdev=64.40, samples=20 00:22:06.628 lat (msec) : 10=0.31%, 20=2.67%, 50=26.30%, 100=68.09%, 250=2.63% 00:22:06.628 cpu : usr=32.89%, sys=1.22%, ctx=1046, majf=0, minf=9 00:22:06.628 IO depths : 1=0.1%, 2=0.3%, 4=1.0%, 8=82.0%, 16=16.7%, 32=0.0%, >=64=0.0% 00:22:06.628 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:06.628 complete : 0=0.0%, 4=87.8%, 8=11.9%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:06.628 issued rwts: total=2582,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:06.628 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:06.628 filename1: (groupid=0, jobs=1): err= 0: pid=83340: Fri Dec 6 09:59:29 2024 00:22:06.628 read: IOPS=261, BW=1047KiB/s (1072kB/s)(10.2MiB/10008msec) 00:22:06.628 slat (usec): min=5, max=8055, avg=33.13, stdev=286.98 00:22:06.628 clat (msec): min=5, max=121, avg=60.96, stdev=19.04 00:22:06.628 lat (msec): min=5, max=121, avg=60.99, stdev=19.05 00:22:06.628 clat percentiles (msec): 00:22:06.628 | 1.00th=[ 22], 5.00th=[ 29], 10.00th=[ 39], 20.00th=[ 46], 00:22:06.628 | 30.00th=[ 50], 40.00th=[ 57], 50.00th=[ 61], 60.00th=[ 66], 00:22:06.628 | 70.00th=[ 70], 80.00th=[ 75], 90.00th=[ 88], 95.00th=[ 95], 00:22:06.628 | 99.00th=[ 107], 99.50th=[ 122], 99.90th=[ 122], 99.95th=[ 122], 00:22:06.628 | 99.99th=[ 122] 00:22:06.628 bw ( KiB/s): min= 672, max= 1520, per=4.31%, avg=1036.53, stdev=168.93, samples=19 00:22:06.628 iops : min= 168, max= 380, avg=259.11, stdev=42.25, samples=19 00:22:06.628 lat (msec) : 10=0.38%, 20=0.23%, 50=30.28%, 100=66.48%, 250=2.63% 00:22:06.628 cpu : usr=37.45%, sys=1.30%, ctx=1307, majf=0, minf=9 00:22:06.628 IO depths : 1=0.1%, 2=0.6%, 4=2.4%, 8=81.2%, 16=15.7%, 32=0.0%, >=64=0.0% 00:22:06.628 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:06.628 complete : 0=0.0%, 4=87.6%, 8=11.8%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:06.628 issued rwts: total=2619,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:06.628 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:06.628 filename2: (groupid=0, jobs=1): err= 0: pid=83341: Fri Dec 6 09:59:29 2024 00:22:06.628 read: IOPS=250, BW=1001KiB/s (1025kB/s)(9.83MiB/10051msec) 00:22:06.628 slat (usec): min=7, max=8029, avg=21.41, stdev=160.01 00:22:06.628 clat (msec): min=6, max=130, avg=63.73, stdev=21.95 00:22:06.628 lat (msec): min=6, max=130, avg=63.75, stdev=21.94 00:22:06.628 clat percentiles (msec): 00:22:06.628 | 1.00th=[ 8], 5.00th=[ 24], 10.00th=[ 39], 20.00th=[ 47], 00:22:06.628 | 30.00th=[ 56], 40.00th=[ 60], 50.00th=[ 64], 60.00th=[ 69], 00:22:06.628 | 
70.00th=[ 72], 80.00th=[ 81], 90.00th=[ 93], 95.00th=[ 104], 00:22:06.628 | 99.00th=[ 120], 99.50th=[ 127], 99.90th=[ 129], 99.95th=[ 129], 00:22:06.628 | 99.99th=[ 131] 00:22:06.628 bw ( KiB/s): min= 638, max= 2160, per=4.15%, avg=999.90, stdev=307.30, samples=20 00:22:06.628 iops : min= 159, max= 540, avg=249.95, stdev=76.86, samples=20 00:22:06.628 lat (msec) : 10=1.27%, 20=1.83%, 50=21.94%, 100=69.32%, 250=5.64% 00:22:06.628 cpu : usr=40.19%, sys=1.53%, ctx=1142, majf=0, minf=9 00:22:06.628 IO depths : 1=0.1%, 2=1.6%, 4=6.0%, 8=76.6%, 16=15.8%, 32=0.0%, >=64=0.0% 00:22:06.628 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:06.628 complete : 0=0.0%, 4=89.2%, 8=9.5%, 16=1.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:06.628 issued rwts: total=2516,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:06.629 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:06.629 filename2: (groupid=0, jobs=1): err= 0: pid=83342: Fri Dec 6 09:59:29 2024 00:22:06.629 read: IOPS=251, BW=1008KiB/s (1032kB/s)(9.86MiB/10024msec) 00:22:06.629 slat (usec): min=5, max=8035, avg=29.96, stdev=243.90 00:22:06.629 clat (msec): min=22, max=121, avg=63.33, stdev=18.39 00:22:06.629 lat (msec): min=22, max=121, avg=63.36, stdev=18.40 00:22:06.629 clat percentiles (msec): 00:22:06.629 | 1.00th=[ 27], 5.00th=[ 34], 10.00th=[ 42], 20.00th=[ 47], 00:22:06.629 | 30.00th=[ 53], 40.00th=[ 59], 50.00th=[ 63], 60.00th=[ 68], 00:22:06.629 | 70.00th=[ 71], 80.00th=[ 78], 90.00th=[ 91], 95.00th=[ 95], 00:22:06.629 | 99.00th=[ 108], 99.50th=[ 116], 99.90th=[ 122], 99.95th=[ 122], 00:22:06.629 | 99.99th=[ 123] 00:22:06.629 bw ( KiB/s): min= 792, max= 1536, per=4.18%, avg=1004.90, stdev=165.78, samples=20 00:22:06.629 iops : min= 198, max= 384, avg=251.20, stdev=41.45, samples=20 00:22:06.629 lat (msec) : 50=27.25%, 100=69.35%, 250=3.41% 00:22:06.629 cpu : usr=37.39%, sys=1.54%, ctx=1459, majf=0, minf=9 00:22:06.629 IO depths : 1=0.1%, 2=0.6%, 4=2.6%, 8=80.5%, 16=16.2%, 32=0.0%, >=64=0.0% 00:22:06.629 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:06.629 complete : 0=0.0%, 4=88.1%, 8=11.3%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:06.629 issued rwts: total=2525,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:06.629 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:06.629 filename2: (groupid=0, jobs=1): err= 0: pid=83343: Fri Dec 6 09:59:29 2024 00:22:06.629 read: IOPS=245, BW=983KiB/s (1007kB/s)(9872KiB/10041msec) 00:22:06.629 slat (usec): min=5, max=11026, avg=30.59, stdev=303.15 00:22:06.629 clat (msec): min=6, max=144, avg=64.91, stdev=23.32 00:22:06.629 lat (msec): min=6, max=144, avg=64.94, stdev=23.32 00:22:06.629 clat percentiles (msec): 00:22:06.629 | 1.00th=[ 7], 5.00th=[ 23], 10.00th=[ 36], 20.00th=[ 48], 00:22:06.629 | 30.00th=[ 58], 40.00th=[ 61], 50.00th=[ 65], 60.00th=[ 70], 00:22:06.629 | 70.00th=[ 72], 80.00th=[ 83], 90.00th=[ 96], 95.00th=[ 106], 00:22:06.629 | 99.00th=[ 121], 99.50th=[ 125], 99.90th=[ 140], 99.95th=[ 140], 00:22:06.629 | 99.99th=[ 144] 00:22:06.629 bw ( KiB/s): min= 638, max= 2304, per=4.08%, avg=981.90, stdev=343.91, samples=20 00:22:06.629 iops : min= 159, max= 576, avg=245.45, stdev=86.00, samples=20 00:22:06.629 lat (msec) : 10=1.86%, 20=2.11%, 50=19.65%, 100=70.26%, 250=6.12% 00:22:06.629 cpu : usr=34.57%, sys=0.99%, ctx=967, majf=0, minf=9 00:22:06.629 IO depths : 1=0.1%, 2=1.5%, 4=5.8%, 8=76.5%, 16=16.2%, 32=0.0%, >=64=0.0% 00:22:06.629 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:06.629 
complete : 0=0.0%, 4=89.4%, 8=9.3%, 16=1.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:06.629 issued rwts: total=2468,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:06.629 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:06.629 filename2: (groupid=0, jobs=1): err= 0: pid=83344: Fri Dec 6 09:59:29 2024 00:22:06.629 read: IOPS=251, BW=1007KiB/s (1031kB/s)(9.87MiB/10034msec) 00:22:06.629 slat (usec): min=5, max=8030, avg=32.51, stdev=275.42 00:22:06.629 clat (msec): min=15, max=134, avg=63.35, stdev=18.41 00:22:06.629 lat (msec): min=15, max=134, avg=63.39, stdev=18.41 00:22:06.629 clat percentiles (msec): 00:22:06.629 | 1.00th=[ 26], 5.00th=[ 35], 10.00th=[ 40], 20.00th=[ 47], 00:22:06.629 | 30.00th=[ 53], 40.00th=[ 59], 50.00th=[ 62], 60.00th=[ 69], 00:22:06.629 | 70.00th=[ 72], 80.00th=[ 79], 90.00th=[ 91], 95.00th=[ 96], 00:22:06.629 | 99.00th=[ 107], 99.50th=[ 108], 99.90th=[ 118], 99.95th=[ 134], 00:22:06.629 | 99.99th=[ 134] 00:22:06.629 bw ( KiB/s): min= 744, max= 1536, per=4.17%, avg=1003.90, stdev=172.52, samples=20 00:22:06.629 iops : min= 186, max= 384, avg=250.95, stdev=43.17, samples=20 00:22:06.629 lat (msec) : 20=0.08%, 50=27.40%, 100=69.79%, 250=2.73% 00:22:06.629 cpu : usr=35.45%, sys=1.29%, ctx=967, majf=0, minf=9 00:22:06.629 IO depths : 1=0.2%, 2=1.1%, 4=4.0%, 8=78.9%, 16=15.8%, 32=0.0%, >=64=0.0% 00:22:06.629 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:06.629 complete : 0=0.0%, 4=88.4%, 8=10.7%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:06.629 issued rwts: total=2526,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:06.629 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:06.629 filename2: (groupid=0, jobs=1): err= 0: pid=83345: Fri Dec 6 09:59:29 2024 00:22:06.629 read: IOPS=245, BW=981KiB/s (1004kB/s)(9824KiB/10018msec) 00:22:06.629 slat (usec): min=5, max=8080, avg=35.66, stdev=321.66 00:22:06.629 clat (msec): min=20, max=138, avg=65.05, stdev=20.10 00:22:06.629 lat (msec): min=20, max=138, avg=65.08, stdev=20.10 00:22:06.629 clat percentiles (msec): 00:22:06.629 | 1.00th=[ 26], 5.00th=[ 37], 10.00th=[ 41], 20.00th=[ 47], 00:22:06.629 | 30.00th=[ 54], 40.00th=[ 61], 50.00th=[ 64], 60.00th=[ 68], 00:22:06.629 | 70.00th=[ 72], 80.00th=[ 82], 90.00th=[ 92], 95.00th=[ 105], 00:22:06.629 | 99.00th=[ 122], 99.50th=[ 127], 99.90th=[ 130], 99.95th=[ 138], 00:22:06.629 | 99.99th=[ 138] 00:22:06.629 bw ( KiB/s): min= 640, max= 1280, per=4.06%, avg=975.75, stdev=170.47, samples=20 00:22:06.629 iops : min= 160, max= 320, avg=243.90, stdev=42.66, samples=20 00:22:06.629 lat (msec) : 50=26.18%, 100=67.51%, 250=6.31% 00:22:06.629 cpu : usr=43.69%, sys=1.61%, ctx=1288, majf=0, minf=9 00:22:06.629 IO depths : 1=0.2%, 2=1.5%, 4=5.5%, 8=77.3%, 16=15.6%, 32=0.0%, >=64=0.0% 00:22:06.629 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:06.629 complete : 0=0.0%, 4=88.8%, 8=10.0%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:06.629 issued rwts: total=2456,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:06.629 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:06.629 filename2: (groupid=0, jobs=1): err= 0: pid=83346: Fri Dec 6 09:59:29 2024 00:22:06.629 read: IOPS=245, BW=983KiB/s (1007kB/s)(9844KiB/10015msec) 00:22:06.629 slat (usec): min=4, max=8033, avg=37.78, stdev=368.96 00:22:06.629 clat (msec): min=15, max=142, avg=64.91, stdev=19.50 00:22:06.629 lat (msec): min=15, max=142, avg=64.95, stdev=19.50 00:22:06.629 clat percentiles (msec): 00:22:06.629 | 1.00th=[ 25], 5.00th=[ 36], 10.00th=[ 42], 20.00th=[ 48], 
00:22:06.629 | 30.00th=[ 55], 40.00th=[ 60], 50.00th=[ 64], 60.00th=[ 69], 00:22:06.629 | 70.00th=[ 72], 80.00th=[ 81], 90.00th=[ 94], 95.00th=[ 101], 00:22:06.629 | 99.00th=[ 122], 99.50th=[ 122], 99.90th=[ 124], 99.95th=[ 142], 00:22:06.629 | 99.99th=[ 142] 00:22:06.629 bw ( KiB/s): min= 638, max= 1296, per=4.07%, avg=979.35, stdev=152.48, samples=20 00:22:06.629 iops : min= 159, max= 324, avg=244.80, stdev=38.17, samples=20 00:22:06.629 lat (msec) : 20=0.08%, 50=25.56%, 100=69.36%, 250=5.00% 00:22:06.629 cpu : usr=32.01%, sys=1.31%, ctx=1007, majf=0, minf=9 00:22:06.629 IO depths : 1=0.1%, 2=0.9%, 4=3.5%, 8=79.5%, 16=16.1%, 32=0.0%, >=64=0.0% 00:22:06.629 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:06.629 complete : 0=0.0%, 4=88.4%, 8=10.8%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:06.629 issued rwts: total=2461,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:06.629 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:06.629 filename2: (groupid=0, jobs=1): err= 0: pid=83347: Fri Dec 6 09:59:29 2024 00:22:06.629 read: IOPS=241, BW=966KiB/s (989kB/s)(9696KiB/10036msec) 00:22:06.629 slat (usec): min=6, max=8045, avg=25.98, stdev=198.19 00:22:06.629 clat (msec): min=5, max=144, avg=66.03, stdev=21.90 00:22:06.629 lat (msec): min=5, max=144, avg=66.05, stdev=21.90 00:22:06.629 clat percentiles (msec): 00:22:06.629 | 1.00th=[ 16], 5.00th=[ 32], 10.00th=[ 39], 20.00th=[ 48], 00:22:06.629 | 30.00th=[ 59], 40.00th=[ 61], 50.00th=[ 64], 60.00th=[ 70], 00:22:06.629 | 70.00th=[ 72], 80.00th=[ 84], 90.00th=[ 95], 95.00th=[ 106], 00:22:06.629 | 99.00th=[ 128], 99.50th=[ 144], 99.90th=[ 144], 99.95th=[ 144], 00:22:06.629 | 99.99th=[ 144] 00:22:06.629 bw ( KiB/s): min= 616, max= 1768, per=4.01%, avg=965.50, stdev=240.89, samples=20 00:22:06.629 iops : min= 154, max= 442, avg=241.35, stdev=60.26, samples=20 00:22:06.629 lat (msec) : 10=0.08%, 20=1.16%, 50=21.33%, 100=72.03%, 250=5.40% 00:22:06.629 cpu : usr=32.63%, sys=1.04%, ctx=949, majf=0, minf=9 00:22:06.630 IO depths : 1=0.1%, 2=1.4%, 4=5.4%, 8=77.0%, 16=16.1%, 32=0.0%, >=64=0.0% 00:22:06.630 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:06.630 complete : 0=0.0%, 4=89.2%, 8=9.6%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:06.630 issued rwts: total=2424,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:06.630 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:06.630 filename2: (groupid=0, jobs=1): err= 0: pid=83348: Fri Dec 6 09:59:29 2024 00:22:06.630 read: IOPS=249, BW=999KiB/s (1023kB/s)(9.78MiB/10030msec) 00:22:06.630 slat (usec): min=6, max=8050, avg=37.72, stdev=358.09 00:22:06.630 clat (msec): min=16, max=137, avg=63.86, stdev=20.09 00:22:06.630 lat (msec): min=16, max=137, avg=63.90, stdev=20.10 00:22:06.630 clat percentiles (msec): 00:22:06.630 | 1.00th=[ 21], 5.00th=[ 26], 10.00th=[ 40], 20.00th=[ 47], 00:22:06.630 | 30.00th=[ 52], 40.00th=[ 60], 50.00th=[ 63], 60.00th=[ 70], 00:22:06.630 | 70.00th=[ 72], 80.00th=[ 81], 90.00th=[ 94], 95.00th=[ 100], 00:22:06.630 | 99.00th=[ 110], 99.50th=[ 115], 99.90th=[ 123], 99.95th=[ 138], 00:22:06.630 | 99.99th=[ 138] 00:22:06.630 bw ( KiB/s): min= 656, max= 1772, per=4.13%, avg=994.90, stdev=224.54, samples=20 00:22:06.630 iops : min= 164, max= 443, avg=248.70, stdev=56.16, samples=20 00:22:06.630 lat (msec) : 20=0.64%, 50=28.08%, 100=66.97%, 250=4.31% 00:22:06.630 cpu : usr=33.02%, sys=1.23%, ctx=932, majf=0, minf=9 00:22:06.630 IO depths : 1=0.1%, 2=1.2%, 4=4.4%, 8=78.4%, 16=15.9%, 32=0.0%, >=64=0.0% 00:22:06.630 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:06.630 complete : 0=0.0%, 4=88.6%, 8=10.4%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:06.630 issued rwts: total=2504,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:06.630 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:06.630 00:22:06.630 Run status group 0 (all jobs): 00:22:06.630 READ: bw=23.5MiB/s (24.6MB/s), 935KiB/s-1065KiB/s (957kB/s-1091kB/s), io=236MiB (248MB), run=10001-10059msec 00:22:06.630 09:59:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:22:06.630 09:59:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:22:06.630 09:59:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:22:06.630 09:59:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:22:06.630 09:59:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:22:06.630 09:59:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:22:06.630 09:59:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.630 09:59:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:06.630 09:59:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.630 09:59:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:22:06.630 09:59:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.630 09:59:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:06.630 09:59:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.630 09:59:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:22:06.630 09:59:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:22:06.630 09:59:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:22:06.630 09:59:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:06.630 09:59:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.630 09:59:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:06.630 09:59:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.630 09:59:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:22:06.630 09:59:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.630 09:59:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:06.630 09:59:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.630 09:59:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:22:06.630 09:59:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:22:06.630 09:59:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:22:06.630 09:59:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:22:06.630 09:59:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.630 09:59:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:06.630 
09:59:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.630 09:59:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:22:06.630 09:59:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.630 09:59:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:06.630 09:59:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.630 09:59:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:22:06.630 09:59:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:22:06.630 09:59:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:22:06.630 09:59:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:22:06.630 09:59:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:22:06.630 09:59:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:22:06.630 09:59:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:22:06.630 09:59:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:22:06.630 09:59:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:22:06.630 09:59:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:22:06.630 09:59:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:22:06.630 09:59:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:22:06.630 09:59:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.630 09:59:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:06.630 bdev_null0 00:22:06.630 09:59:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.630 09:59:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:22:06.630 09:59:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.630 09:59:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:06.630 09:59:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.630 09:59:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:22:06.630 09:59:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.630 09:59:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:06.630 09:59:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.630 09:59:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:22:06.630 09:59:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.630 09:59:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:06.630 [2024-12-06 09:59:30.286868] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:06.630 09:59:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.630 09:59:30 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@30 -- # for sub in "$@" 00:22:06.630 09:59:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:22:06.630 09:59:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:22:06.630 09:59:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:22:06.630 09:59:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.630 09:59:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:06.630 bdev_null1 00:22:06.630 09:59:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.630 09:59:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:22:06.631 09:59:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.631 09:59:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:06.631 09:59:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.631 09:59:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:22:06.631 09:59:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.631 09:59:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:06.631 09:59:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.631 09:59:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:06.631 09:59:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.631 09:59:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:06.631 09:59:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.631 09:59:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:22:06.631 09:59:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:22:06.631 09:59:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:22:06.631 09:59:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:22:06.631 09:59:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:22:06.631 09:59:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:06.631 09:59:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:06.631 09:59:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:06.631 { 00:22:06.631 "params": { 00:22:06.631 "name": "Nvme$subsystem", 00:22:06.631 "trtype": "$TEST_TRANSPORT", 00:22:06.631 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:06.631 "adrfam": "ipv4", 00:22:06.631 "trsvcid": "$NVMF_PORT", 00:22:06.631 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:06.631 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:06.631 "hdgst": ${hdgst:-false}, 00:22:06.631 "ddgst": ${ddgst:-false} 00:22:06.631 }, 00:22:06.631 "method": "bdev_nvme_attach_controller" 00:22:06.631 } 00:22:06.631 EOF 00:22:06.631 )") 00:22:06.631 09:59:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # 
gen_fio_conf 00:22:06.631 09:59:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:06.631 09:59:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:22:06.631 09:59:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:22:06.631 09:59:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:22:06.631 09:59:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:06.631 09:59:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:22:06.631 09:59:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:06.631 09:59:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:22:06.631 09:59:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:06.631 09:59:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:06.631 09:59:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:22:06.631 09:59:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:06.631 09:59:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:22:06.631 09:59:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:22:06.631 09:59:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:22:06.631 09:59:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:06.631 09:59:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:22:06.631 09:59:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:06.631 09:59:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:06.631 { 00:22:06.631 "params": { 00:22:06.631 "name": "Nvme$subsystem", 00:22:06.631 "trtype": "$TEST_TRANSPORT", 00:22:06.631 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:06.631 "adrfam": "ipv4", 00:22:06.631 "trsvcid": "$NVMF_PORT", 00:22:06.631 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:06.631 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:06.631 "hdgst": ${hdgst:-false}, 00:22:06.631 "ddgst": ${ddgst:-false} 00:22:06.631 }, 00:22:06.631 "method": "bdev_nvme_attach_controller" 00:22:06.631 } 00:22:06.631 EOF 00:22:06.631 )") 00:22:06.631 09:59:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:22:06.631 09:59:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:22:06.631 09:59:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:22:06.631 09:59:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:22:06.631 09:59:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:22:06.631 09:59:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:06.631 "params": { 00:22:06.631 "name": "Nvme0", 00:22:06.631 "trtype": "tcp", 00:22:06.631 "traddr": "10.0.0.3", 00:22:06.631 "adrfam": "ipv4", 00:22:06.631 "trsvcid": "4420", 00:22:06.631 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:06.631 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:06.631 "hdgst": false, 00:22:06.631 "ddgst": false 00:22:06.631 }, 00:22:06.631 "method": "bdev_nvme_attach_controller" 00:22:06.631 },{ 00:22:06.631 "params": { 00:22:06.631 "name": "Nvme1", 00:22:06.631 "trtype": "tcp", 00:22:06.631 "traddr": "10.0.0.3", 00:22:06.631 "adrfam": "ipv4", 00:22:06.631 "trsvcid": "4420", 00:22:06.631 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:06.631 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:06.631 "hdgst": false, 00:22:06.631 "ddgst": false 00:22:06.631 }, 00:22:06.631 "method": "bdev_nvme_attach_controller" 00:22:06.631 }' 00:22:06.631 09:59:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:06.631 09:59:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:06.631 09:59:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:06.631 09:59:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:22:06.631 09:59:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:06.631 09:59:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:06.631 09:59:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:06.631 09:59:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:06.631 09:59:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:06.631 09:59:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:06.631 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:22:06.631 ... 00:22:06.631 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:22:06.631 ... 
00:22:06.631 fio-3.35 00:22:06.631 Starting 4 threads 00:22:11.904 00:22:11.904 filename0: (groupid=0, jobs=1): err= 0: pid=83486: Fri Dec 6 09:59:36 2024 00:22:11.904 read: IOPS=2169, BW=16.9MiB/s (17.8MB/s)(84.8MiB/5001msec) 00:22:11.904 slat (usec): min=6, max=106, avg=17.52, stdev=10.41 00:22:11.904 clat (usec): min=1041, max=7005, avg=3626.90, stdev=879.62 00:22:11.904 lat (usec): min=1048, max=7039, avg=3644.42, stdev=880.74 00:22:11.904 clat percentiles (usec): 00:22:11.904 | 1.00th=[ 1483], 5.00th=[ 1975], 10.00th=[ 2180], 20.00th=[ 2704], 00:22:11.904 | 30.00th=[ 3326], 40.00th=[ 3687], 50.00th=[ 3982], 60.00th=[ 4080], 00:22:11.904 | 70.00th=[ 4178], 80.00th=[ 4293], 90.00th=[ 4490], 95.00th=[ 4621], 00:22:11.904 | 99.00th=[ 5014], 99.50th=[ 5211], 99.90th=[ 6194], 99.95th=[ 6325], 00:22:11.904 | 99.99th=[ 6587] 00:22:11.904 bw ( KiB/s): min=14976, max=20184, per=23.51%, avg=17414.22, stdev=2109.95, samples=9 00:22:11.904 iops : min= 1872, max= 2523, avg=2176.78, stdev=263.74, samples=9 00:22:11.904 lat (msec) : 2=5.54%, 4=45.35%, 10=49.12% 00:22:11.904 cpu : usr=91.96%, sys=6.90%, ctx=54, majf=0, minf=0 00:22:11.904 IO depths : 1=1.1%, 2=12.8%, 4=57.4%, 8=28.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:11.904 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:11.904 complete : 0=0.0%, 4=95.0%, 8=5.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:11.904 issued rwts: total=10850,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:11.904 latency : target=0, window=0, percentile=100.00%, depth=8 00:22:11.904 filename0: (groupid=0, jobs=1): err= 0: pid=83487: Fri Dec 6 09:59:36 2024 00:22:11.904 read: IOPS=2366, BW=18.5MiB/s (19.4MB/s)(92.5MiB/5002msec) 00:22:11.904 slat (usec): min=3, max=195, avg=18.36, stdev=10.13 00:22:11.904 clat (usec): min=401, max=6958, avg=3326.82, stdev=979.69 00:22:11.904 lat (usec): min=414, max=6994, avg=3345.18, stdev=979.81 00:22:11.904 clat percentiles (usec): 00:22:11.904 | 1.00th=[ 1205], 5.00th=[ 1844], 10.00th=[ 2057], 20.00th=[ 2245], 00:22:11.904 | 30.00th=[ 2474], 40.00th=[ 3228], 50.00th=[ 3523], 60.00th=[ 3916], 00:22:11.904 | 70.00th=[ 4047], 80.00th=[ 4178], 90.00th=[ 4424], 95.00th=[ 4686], 00:22:11.904 | 99.00th=[ 5145], 99.50th=[ 5342], 99.90th=[ 5866], 99.95th=[ 6325], 00:22:11.904 | 99.99th=[ 6849] 00:22:11.904 bw ( KiB/s): min=16768, max=20880, per=25.52%, avg=18903.11, stdev=1450.81, samples=9 00:22:11.904 iops : min= 2096, max= 2610, avg=2362.89, stdev=181.35, samples=9 00:22:11.904 lat (usec) : 500=0.03%, 750=0.03%, 1000=0.13% 00:22:11.904 lat (msec) : 2=7.56%, 4=58.91%, 10=33.34% 00:22:11.904 cpu : usr=93.40%, sys=5.28%, ctx=88, majf=0, minf=0 00:22:11.904 IO depths : 1=0.8%, 2=6.6%, 4=60.4%, 8=32.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:11.904 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:11.904 complete : 0=0.0%, 4=97.4%, 8=2.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:11.904 issued rwts: total=11838,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:11.904 latency : target=0, window=0, percentile=100.00%, depth=8 00:22:11.904 filename1: (groupid=0, jobs=1): err= 0: pid=83488: Fri Dec 6 09:59:36 2024 00:22:11.904 read: IOPS=2366, BW=18.5MiB/s (19.4MB/s)(92.5MiB/5002msec) 00:22:11.904 slat (usec): min=6, max=107, avg=17.85, stdev=10.05 00:22:11.905 clat (usec): min=1015, max=7029, avg=3327.47, stdev=949.69 00:22:11.905 lat (usec): min=1022, max=7063, avg=3345.32, stdev=951.34 00:22:11.905 clat percentiles (usec): 00:22:11.905 | 1.00th=[ 1598], 5.00th=[ 1991], 10.00th=[ 2114], 20.00th=[ 2245], 
00:22:11.905 | 30.00th=[ 2442], 40.00th=[ 2999], 50.00th=[ 3589], 60.00th=[ 3916], 00:22:11.905 | 70.00th=[ 4047], 80.00th=[ 4228], 90.00th=[ 4424], 95.00th=[ 4555], 00:22:11.905 | 99.00th=[ 4948], 99.50th=[ 5145], 99.90th=[ 6259], 99.95th=[ 6390], 00:22:11.905 | 99.99th=[ 6849] 00:22:11.905 bw ( KiB/s): min=16144, max=22192, per=25.94%, avg=19217.78, stdev=2065.57, samples=9 00:22:11.905 iops : min= 2018, max= 2774, avg=2402.22, stdev=258.20, samples=9 00:22:11.905 lat (msec) : 2=5.32%, 4=61.62%, 10=33.06% 00:22:11.905 cpu : usr=93.98%, sys=4.94%, ctx=217, majf=0, minf=0 00:22:11.905 IO depths : 1=1.1%, 2=6.4%, 4=61.0%, 8=31.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:11.905 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:11.905 complete : 0=0.0%, 4=97.5%, 8=2.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:11.905 issued rwts: total=11836,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:11.905 latency : target=0, window=0, percentile=100.00%, depth=8 00:22:11.905 filename1: (groupid=0, jobs=1): err= 0: pid=83489: Fri Dec 6 09:59:36 2024 00:22:11.905 read: IOPS=2358, BW=18.4MiB/s (19.3MB/s)(92.1MiB/5001msec) 00:22:11.905 slat (usec): min=4, max=110, avg=18.25, stdev= 9.98 00:22:11.905 clat (usec): min=905, max=6692, avg=3337.05, stdev=965.71 00:22:11.905 lat (usec): min=916, max=6731, avg=3355.30, stdev=965.43 00:22:11.905 clat percentiles (usec): 00:22:11.905 | 1.00th=[ 1450], 5.00th=[ 1909], 10.00th=[ 2073], 20.00th=[ 2245], 00:22:11.905 | 30.00th=[ 2442], 40.00th=[ 3228], 50.00th=[ 3556], 60.00th=[ 3916], 00:22:11.905 | 70.00th=[ 4047], 80.00th=[ 4178], 90.00th=[ 4424], 95.00th=[ 4686], 00:22:11.905 | 99.00th=[ 5145], 99.50th=[ 5342], 99.90th=[ 5800], 99.95th=[ 6128], 00:22:11.905 | 99.99th=[ 6521] 00:22:11.905 bw ( KiB/s): min=16224, max=20832, per=25.57%, avg=18942.22, stdev=1602.27, samples=9 00:22:11.905 iops : min= 2028, max= 2604, avg=2367.78, stdev=200.28, samples=9 00:22:11.905 lat (usec) : 1000=0.03% 00:22:11.905 lat (msec) : 2=6.92%, 4=60.18%, 10=32.87% 00:22:11.905 cpu : usr=94.02%, sys=4.90%, ctx=6, majf=0, minf=0 00:22:11.905 IO depths : 1=0.8%, 2=7.3%, 4=60.2%, 8=31.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:11.905 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:11.905 complete : 0=0.0%, 4=97.2%, 8=2.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:11.905 issued rwts: total=11794,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:11.905 latency : target=0, window=0, percentile=100.00%, depth=8 00:22:11.905 00:22:11.905 Run status group 0 (all jobs): 00:22:11.905 READ: bw=72.3MiB/s (75.9MB/s), 16.9MiB/s-18.5MiB/s (17.8MB/s-19.4MB/s), io=362MiB (379MB), run=5001-5002msec 00:22:11.905 09:59:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:22:11.905 09:59:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:22:11.905 09:59:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:22:11.905 09:59:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:22:11.905 09:59:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:22:11.905 09:59:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:22:11.905 09:59:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.905 09:59:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:11.905 09:59:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:22:11.905 09:59:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:22:11.905 09:59:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.905 09:59:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:11.905 09:59:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.905 09:59:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:22:11.905 09:59:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:22:11.905 09:59:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:22:11.905 09:59:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:11.905 09:59:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.905 09:59:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:11.905 09:59:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.905 09:59:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:22:11.905 09:59:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.905 09:59:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:11.905 09:59:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.905 00:22:11.905 real 0m23.734s 00:22:11.905 user 2m6.168s 00:22:11.905 sys 0m6.047s 00:22:11.905 09:59:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:11.905 09:59:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:11.905 ************************************ 00:22:11.905 END TEST fio_dif_rand_params 00:22:11.905 ************************************ 00:22:11.905 09:59:36 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:22:11.905 09:59:36 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:11.905 09:59:36 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:11.905 09:59:36 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:11.905 ************************************ 00:22:11.905 START TEST fio_dif_digest 00:22:11.905 ************************************ 00:22:11.905 09:59:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:22:11.905 09:59:36 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:22:11.905 09:59:36 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:22:11.905 09:59:36 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:22:11.905 09:59:36 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:22:11.905 09:59:36 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:22:11.905 09:59:36 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:22:11.905 09:59:36 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:22:11.905 09:59:36 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:22:11.905 09:59:36 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:22:11.905 09:59:36 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:22:11.905 09:59:36 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:22:11.905 09:59:36 
nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:22:11.905 09:59:36 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:22:11.905 09:59:36 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:22:11.905 09:59:36 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:22:11.905 09:59:36 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:22:11.905 09:59:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.905 09:59:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:22:11.905 bdev_null0 00:22:11.905 09:59:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.905 09:59:36 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:22:11.905 09:59:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.905 09:59:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:22:11.905 09:59:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.905 09:59:36 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:22:11.905 09:59:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.905 09:59:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:22:11.905 09:59:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.905 09:59:36 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:22:11.905 09:59:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.905 09:59:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:22:11.905 [2024-12-06 09:59:36.572900] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:11.905 09:59:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.905 09:59:36 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:22:11.905 09:59:36 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:22:11.905 09:59:36 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:22:11.905 09:59:36 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:22:11.905 09:59:36 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:11.905 09:59:36 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:22:11.905 09:59:36 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:11.905 09:59:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:11.905 09:59:36 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:22:11.905 09:59:36 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:11.905 { 00:22:11.905 "params": { 00:22:11.905 "name": "Nvme$subsystem", 00:22:11.905 "trtype": "$TEST_TRANSPORT", 00:22:11.905 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:11.905 "adrfam": "ipv4", 00:22:11.905 "trsvcid": 
"$NVMF_PORT", 00:22:11.905 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:11.905 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:11.905 "hdgst": ${hdgst:-false}, 00:22:11.905 "ddgst": ${ddgst:-false} 00:22:11.905 }, 00:22:11.905 "method": "bdev_nvme_attach_controller" 00:22:11.905 } 00:22:11.905 EOF 00:22:11.905 )") 00:22:11.905 09:59:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:22:11.905 09:59:36 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:22:11.905 09:59:36 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:22:11.905 09:59:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:11.905 09:59:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:22:11.905 09:59:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:11.906 09:59:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:22:11.906 09:59:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:11.906 09:59:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:11.906 09:59:36 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:22:11.906 09:59:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:11.906 09:59:36 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:22:11.906 09:59:36 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:22:11.906 09:59:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:22:11.906 09:59:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:11.906 09:59:36 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:22:11.906 09:59:36 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:22:11.906 09:59:36 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:11.906 "params": { 00:22:11.906 "name": "Nvme0", 00:22:11.906 "trtype": "tcp", 00:22:11.906 "traddr": "10.0.0.3", 00:22:11.906 "adrfam": "ipv4", 00:22:11.906 "trsvcid": "4420", 00:22:11.906 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:11.906 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:11.906 "hdgst": true, 00:22:11.906 "ddgst": true 00:22:11.906 }, 00:22:11.906 "method": "bdev_nvme_attach_controller" 00:22:11.906 }' 00:22:11.906 09:59:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:11.906 09:59:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:11.906 09:59:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:11.906 09:59:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:22:11.906 09:59:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:11.906 09:59:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:11.906 09:59:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:11.906 09:59:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:11.906 09:59:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:11.906 09:59:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:11.906 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:22:11.906 ... 
00:22:11.906 fio-3.35 00:22:11.906 Starting 3 threads 00:22:24.160 00:22:24.160 filename0: (groupid=0, jobs=1): err= 0: pid=83599: Fri Dec 6 09:59:47 2024 00:22:24.160 read: IOPS=246, BW=30.8MiB/s (32.3MB/s)(309MiB/10010msec) 00:22:24.160 slat (nsec): min=6547, max=71970, avg=12886.09, stdev=7115.54 00:22:24.160 clat (usec): min=10972, max=21166, avg=12129.71, stdev=958.78 00:22:24.160 lat (usec): min=10980, max=21186, avg=12142.60, stdev=959.06 00:22:24.160 clat percentiles (usec): 00:22:24.160 | 1.00th=[11076], 5.00th=[11338], 10.00th=[11600], 20.00th=[11731], 00:22:24.160 | 30.00th=[11863], 40.00th=[11863], 50.00th=[11994], 60.00th=[11994], 00:22:24.160 | 70.00th=[12125], 80.00th=[12256], 90.00th=[12780], 95.00th=[13042], 00:22:24.160 | 99.00th=[17695], 99.50th=[17957], 99.90th=[21103], 99.95th=[21103], 00:22:24.160 | 99.99th=[21103] 00:22:24.160 bw ( KiB/s): min=28416, max=33024, per=33.33%, avg=31568.84, stdev=1141.85, samples=19 00:22:24.160 iops : min= 222, max= 258, avg=246.63, stdev= 8.92, samples=19 00:22:24.160 lat (msec) : 20=99.88%, 50=0.12% 00:22:24.160 cpu : usr=95.77%, sys=3.68%, ctx=24, majf=0, minf=0 00:22:24.160 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:24.160 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:24.160 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:24.160 issued rwts: total=2469,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:24.160 latency : target=0, window=0, percentile=100.00%, depth=3 00:22:24.160 filename0: (groupid=0, jobs=1): err= 0: pid=83600: Fri Dec 6 09:59:47 2024 00:22:24.160 read: IOPS=246, BW=30.8MiB/s (32.3MB/s)(309MiB/10007msec) 00:22:24.160 slat (nsec): min=6319, max=61037, avg=11453.70, stdev=6377.80 00:22:24.160 clat (usec): min=9385, max=21927, avg=12129.30, stdev=990.23 00:22:24.160 lat (usec): min=9392, max=21945, avg=12140.75, stdev=990.34 00:22:24.160 clat percentiles (usec): 00:22:24.160 | 1.00th=[11076], 5.00th=[11338], 10.00th=[11600], 20.00th=[11731], 00:22:24.160 | 30.00th=[11863], 40.00th=[11863], 50.00th=[11994], 60.00th=[11994], 00:22:24.160 | 70.00th=[12125], 80.00th=[12256], 90.00th=[12649], 95.00th=[13042], 00:22:24.160 | 99.00th=[17695], 99.50th=[17957], 99.90th=[21890], 99.95th=[21890], 00:22:24.160 | 99.99th=[21890] 00:22:24.160 bw ( KiB/s): min=28416, max=33024, per=33.33%, avg=31568.84, stdev=1141.85, samples=19 00:22:24.160 iops : min= 222, max= 258, avg=246.63, stdev= 8.92, samples=19 00:22:24.160 lat (msec) : 10=0.12%, 20=99.64%, 50=0.24% 00:22:24.160 cpu : usr=94.95%, sys=4.52%, ctx=10, majf=0, minf=0 00:22:24.160 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:24.160 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:24.160 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:24.160 issued rwts: total=2469,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:24.160 latency : target=0, window=0, percentile=100.00%, depth=3 00:22:24.160 filename0: (groupid=0, jobs=1): err= 0: pid=83601: Fri Dec 6 09:59:47 2024 00:22:24.160 read: IOPS=246, BW=30.8MiB/s (32.3MB/s)(309MiB/10007msec) 00:22:24.160 slat (nsec): min=6295, max=57216, avg=9456.90, stdev=4090.39 00:22:24.160 clat (usec): min=7858, max=22824, avg=12135.15, stdev=996.51 00:22:24.160 lat (usec): min=7865, max=22835, avg=12144.61, stdev=996.61 00:22:24.160 clat percentiles (usec): 00:22:24.160 | 1.00th=[11076], 5.00th=[11338], 10.00th=[11600], 20.00th=[11731], 00:22:24.160 | 30.00th=[11863], 
40.00th=[11863], 50.00th=[11994], 60.00th=[12125], 00:22:24.160 | 70.00th=[12125], 80.00th=[12256], 90.00th=[12780], 95.00th=[13042], 00:22:24.160 | 99.00th=[17695], 99.50th=[17957], 99.90th=[22938], 99.95th=[22938], 00:22:24.160 | 99.99th=[22938] 00:22:24.160 bw ( KiB/s): min=28416, max=33024, per=33.33%, avg=31568.84, stdev=1112.78, samples=19 00:22:24.160 iops : min= 222, max= 258, avg=246.63, stdev= 8.69, samples=19 00:22:24.160 lat (msec) : 10=0.12%, 20=99.64%, 50=0.24% 00:22:24.160 cpu : usr=95.71%, sys=3.75%, ctx=20, majf=0, minf=0 00:22:24.160 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:24.160 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:24.160 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:24.160 issued rwts: total=2469,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:24.160 latency : target=0, window=0, percentile=100.00%, depth=3 00:22:24.160 00:22:24.160 Run status group 0 (all jobs): 00:22:24.160 READ: bw=92.5MiB/s (97.0MB/s), 30.8MiB/s-30.8MiB/s (32.3MB/s-32.3MB/s), io=926MiB (971MB), run=10007-10010msec 00:22:24.160 09:59:47 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:22:24.160 09:59:47 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:22:24.160 09:59:47 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:22:24.160 09:59:47 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:22:24.160 09:59:47 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:22:24.160 09:59:47 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:22:24.160 09:59:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.160 09:59:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:22:24.160 09:59:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.160 09:59:47 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:22:24.160 09:59:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.160 09:59:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:22:24.160 09:59:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.160 00:22:24.160 real 0m11.060s 00:22:24.160 user 0m29.375s 00:22:24.160 sys 0m1.474s 00:22:24.160 09:59:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:24.160 09:59:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:22:24.160 ************************************ 00:22:24.160 END TEST fio_dif_digest 00:22:24.160 ************************************ 00:22:24.160 09:59:47 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:22:24.160 09:59:47 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:22:24.160 09:59:47 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:24.160 09:59:47 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:22:24.160 09:59:47 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:24.160 09:59:47 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:22:24.160 09:59:47 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:24.160 09:59:47 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:24.160 rmmod nvme_tcp 00:22:24.160 rmmod nvme_fabrics 00:22:24.160 rmmod nvme_keyring 00:22:24.160 09:59:47 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:22:24.160 09:59:47 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:22:24.160 09:59:47 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:22:24.160 09:59:47 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 82847 ']' 00:22:24.160 09:59:47 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 82847 00:22:24.160 09:59:47 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 82847 ']' 00:22:24.160 09:59:47 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 82847 00:22:24.160 09:59:47 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:22:24.160 09:59:47 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:24.160 09:59:47 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82847 00:22:24.160 09:59:47 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:24.160 09:59:47 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:24.160 killing process with pid 82847 00:22:24.160 09:59:47 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82847' 00:22:24.160 09:59:47 nvmf_dif -- common/autotest_common.sh@973 -- # kill 82847 00:22:24.160 09:59:47 nvmf_dif -- common/autotest_common.sh@978 -- # wait 82847 00:22:24.160 09:59:47 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:22:24.160 09:59:47 nvmf_dif -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:22:24.160 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:24.160 Waiting for block devices as requested 00:22:24.160 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:22:24.160 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:22:24.160 09:59:48 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:24.160 09:59:48 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:24.160 09:59:48 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:22:24.160 09:59:48 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:22:24.160 09:59:48 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:24.160 09:59:48 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:22:24.160 09:59:48 nvmf_dif -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:24.160 09:59:48 nvmf_dif -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:22:24.160 09:59:48 nvmf_dif -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:22:24.160 09:59:48 nvmf_dif -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:22:24.160 09:59:48 nvmf_dif -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:22:24.160 09:59:48 nvmf_dif -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:22:24.160 09:59:48 nvmf_dif -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:22:24.160 09:59:48 nvmf_dif -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:22:24.160 09:59:48 nvmf_dif -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:22:24.160 09:59:48 nvmf_dif -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:22:24.160 09:59:48 nvmf_dif -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:22:24.160 09:59:48 nvmf_dif -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:22:24.160 09:59:48 nvmf_dif -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:22:24.160 09:59:48 nvmf_dif -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:24.160 09:59:48 nvmf_dif -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete 
nvmf_tgt_if2 00:22:24.160 09:59:48 nvmf_dif -- nvmf/common.sh@246 -- # remove_spdk_ns 00:22:24.160 09:59:48 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:24.160 09:59:48 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:22:24.160 09:59:48 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:24.160 09:59:48 nvmf_dif -- nvmf/common.sh@300 -- # return 0 00:22:24.160 00:22:24.160 real 0m59.815s 00:22:24.160 user 3m50.547s 00:22:24.160 sys 0m16.961s 00:22:24.160 09:59:48 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:24.160 ************************************ 00:22:24.160 END TEST nvmf_dif 00:22:24.160 ************************************ 00:22:24.161 09:59:48 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:24.161 09:59:48 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:22:24.161 09:59:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:24.161 09:59:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:24.161 09:59:48 -- common/autotest_common.sh@10 -- # set +x 00:22:24.161 ************************************ 00:22:24.161 START TEST nvmf_abort_qd_sizes 00:22:24.161 ************************************ 00:22:24.161 09:59:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:22:24.161 * Looking for test storage... 00:22:24.161 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:22:24.161 09:59:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:24.161 09:59:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lcov --version 00:22:24.161 09:59:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:24.161 09:59:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:24.161 09:59:49 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:24.161 09:59:49 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:24.161 09:59:49 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:24.161 09:59:49 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:22:24.161 09:59:49 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:22:24.161 09:59:49 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:22:24.161 09:59:49 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:22:24.161 09:59:49 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:22:24.161 09:59:49 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:22:24.161 09:59:49 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:22:24.161 09:59:49 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:24.161 09:59:49 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:22:24.161 09:59:49 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:22:24.161 09:59:49 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:24.161 09:59:49 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:24.161 09:59:49 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:22:24.161 09:59:49 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:22:24.161 09:59:49 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:24.161 09:59:49 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:22:24.161 09:59:49 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:22:24.161 09:59:49 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:22:24.161 09:59:49 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:22:24.161 09:59:49 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:24.161 09:59:49 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:22:24.161 09:59:49 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:22:24.161 09:59:49 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:24.161 09:59:49 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:24.161 09:59:49 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:22:24.161 09:59:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:24.161 09:59:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:24.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:24.161 --rc genhtml_branch_coverage=1 00:22:24.161 --rc genhtml_function_coverage=1 00:22:24.161 --rc genhtml_legend=1 00:22:24.161 --rc geninfo_all_blocks=1 00:22:24.161 --rc geninfo_unexecuted_blocks=1 00:22:24.161 00:22:24.161 ' 00:22:24.161 09:59:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:24.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:24.161 --rc genhtml_branch_coverage=1 00:22:24.161 --rc genhtml_function_coverage=1 00:22:24.161 --rc genhtml_legend=1 00:22:24.161 --rc geninfo_all_blocks=1 00:22:24.161 --rc geninfo_unexecuted_blocks=1 00:22:24.161 00:22:24.161 ' 00:22:24.161 09:59:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:24.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:24.161 --rc genhtml_branch_coverage=1 00:22:24.161 --rc genhtml_function_coverage=1 00:22:24.161 --rc genhtml_legend=1 00:22:24.161 --rc geninfo_all_blocks=1 00:22:24.161 --rc geninfo_unexecuted_blocks=1 00:22:24.161 00:22:24.161 ' 00:22:24.161 09:59:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:24.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:24.161 --rc genhtml_branch_coverage=1 00:22:24.161 --rc genhtml_function_coverage=1 00:22:24.161 --rc genhtml_legend=1 00:22:24.161 --rc geninfo_all_blocks=1 00:22:24.161 --rc geninfo_unexecuted_blocks=1 00:22:24.161 00:22:24.161 ' 00:22:24.161 09:59:49 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:24.161 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:22:24.161 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:24.161 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:24.161 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:24.161 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:24.161 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:22:24.161 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:24.161 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:24.161 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:24.161 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:24.161 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:24.161 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:22:24.161 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:22:24.161 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:24.161 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:24.161 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:24.161 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:24.161 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:24.161 09:59:49 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:22:24.161 09:59:49 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:24.161 09:59:49 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:24.161 09:59:49 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:24.161 09:59:49 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:24.161 09:59:49 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:24.161 09:59:49 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:24.161 09:59:49 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:22:24.161 09:59:49 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:24.161 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:22:24.161 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:24.161 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:24.161 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:24.161 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:24.161 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:24.161 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:24.161 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:24.161 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:24.161 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:24.161 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:24.161 09:59:49 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:22:24.161 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:24.161 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:24.161 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:24.161 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:24.161 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:24.161 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:24.161 09:59:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:22:24.161 09:59:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:24.161 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:22:24.161 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:22:24.161 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:22:24.161 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:22:24.161 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:22:24.161 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@460 -- # nvmf_veth_init 00:22:24.161 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:24.161 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:22:24.161 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:22:24.161 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:22:24.161 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:24.161 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:22:24.161 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:24.161 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # 
NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:22:24.161 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:24.161 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:22:24.162 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:24.162 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:24.162 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:24.162 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:24.162 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:24.162 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:24.162 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:22:24.162 Cannot find device "nvmf_init_br" 00:22:24.162 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:22:24.162 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:22:24.162 Cannot find device "nvmf_init_br2" 00:22:24.162 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:22:24.162 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:22:24.162 Cannot find device "nvmf_tgt_br" 00:22:24.162 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # true 00:22:24.162 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:22:24.162 Cannot find device "nvmf_tgt_br2" 00:22:24.162 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # true 00:22:24.162 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:22:24.162 Cannot find device "nvmf_init_br" 00:22:24.162 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # true 00:22:24.162 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:22:24.162 Cannot find device "nvmf_init_br2" 00:22:24.162 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # true 00:22:24.162 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:22:24.162 Cannot find device "nvmf_tgt_br" 00:22:24.162 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # true 00:22:24.162 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:22:24.162 Cannot find device "nvmf_tgt_br2" 00:22:24.162 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # true 00:22:24.162 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:22:24.162 Cannot find device "nvmf_br" 00:22:24.162 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # true 00:22:24.162 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:22:24.162 Cannot find device "nvmf_init_if" 00:22:24.162 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # true 00:22:24.162 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:22:24.162 Cannot find device "nvmf_init_if2" 00:22:24.162 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # true 00:22:24.162 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:24.162 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 
00:22:24.162 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # true 00:22:24.162 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:24.162 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:24.162 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # true 00:22:24.162 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:22:24.162 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:24.162 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:22:24.162 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:24.162 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:24.162 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:24.162 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:24.162 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:24.162 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:22:24.162 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:22:24.162 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:22:24.162 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:22:24.162 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:22:24.162 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:22:24.162 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:22:24.162 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:22:24.162 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:22:24.162 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:24.162 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:24.162 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:24.162 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:22:24.162 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:22:24.162 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:22:24.162 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:22:24.162 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:24.162 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:24.162 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:24.162 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:22:24.162 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:22:24.162 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:22:24.162 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:24.162 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:22:24.420 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:22:24.420 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:24.420 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:22:24.420 00:22:24.420 --- 10.0.0.3 ping statistics --- 00:22:24.420 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:24.420 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:22:24.420 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:22:24.420 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:22:24.420 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:22:24.420 00:22:24.420 --- 10.0.0.4 ping statistics --- 00:22:24.420 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:24.420 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:22:24.420 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:24.420 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:24.420 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:22:24.420 00:22:24.420 --- 10.0.0.1 ping statistics --- 00:22:24.420 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:24.420 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:22:24.420 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:22:24.420 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:24.420 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms 00:22:24.420 00:22:24.420 --- 10.0.0.2 ping statistics --- 00:22:24.420 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:24.420 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:22:24.420 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:24.420 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@461 -- # return 0 00:22:24.420 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:22:24.420 09:59:49 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:22:24.986 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:24.986 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:22:24.986 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:22:25.244 09:59:50 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:25.244 09:59:50 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:25.244 09:59:50 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:25.244 09:59:50 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:25.244 09:59:50 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:25.244 09:59:50 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:25.244 09:59:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:22:25.244 09:59:50 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:25.244 09:59:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:25.244 09:59:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:22:25.244 09:59:50 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=84247 00:22:25.244 09:59:50 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:22:25.244 09:59:50 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 84247 00:22:25.244 09:59:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 84247 ']' 00:22:25.244 09:59:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:25.244 09:59:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:25.244 09:59:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:25.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:25.244 09:59:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:25.244 09:59:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:22:25.244 [2024-12-06 09:59:50.378409] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:22:25.244 [2024-12-06 09:59:50.378505] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:25.503 [2024-12-06 09:59:50.532482] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:25.503 [2024-12-06 09:59:50.598705] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:25.503 [2024-12-06 09:59:50.598763] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:25.503 [2024-12-06 09:59:50.598777] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:25.503 [2024-12-06 09:59:50.598788] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:25.503 [2024-12-06 09:59:50.598797] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:25.503 [2024-12-06 09:59:50.600080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:25.503 [2024-12-06 09:59:50.600219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:25.503 [2024-12-06 09:59:50.600328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:25.503 [2024-12-06 09:59:50.600331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:25.503 [2024-12-06 09:59:50.658696] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:25.503 09:59:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:25.503 09:59:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:22:25.503 09:59:50 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:25.503 09:59:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:25.503 09:59:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:22:25.503 09:59:50 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:25.503 09:59:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:22:25.503 09:59:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:22:25.503 09:59:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:22:25.503 09:59:50 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:22:25.503 09:59:50 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:22:25.503 09:59:50 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n '' ]] 00:22:25.503 09:59:50 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:22:25.503 09:59:50 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:22:25.503 09:59:50 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # local bdf= 00:22:25.503 09:59:50 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:22:25.503 09:59:50 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # local class 00:22:25.503 09:59:50 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # local subclass 00:22:25.503 09:59:50 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # local progif 00:22:25.503 09:59:50 
nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # printf %02x 1 00:22:25.503 09:59:50 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # class=01 00:22:25.763 09:59:50 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # printf %02x 8 00:22:25.763 09:59:50 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # subclass=08 00:22:25.763 09:59:50 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # printf %02x 2 00:22:25.763 09:59:50 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # progif=02 00:22:25.763 09:59:50 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # hash lspci 00:22:25.763 09:59:50 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:22:25.763 09:59:50 nvmf_abort_qd_sizes -- scripts/common.sh@243 -- # grep -i -- -p02 00:22:25.763 09:59:50 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # lspci -mm -n -D 00:22:25.763 09:59:50 nvmf_abort_qd_sizes -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:22:25.763 09:59:50 nvmf_abort_qd_sizes -- scripts/common.sh@245 -- # tr -d '"' 00:22:25.763 09:59:50 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:22:25.763 09:59:50 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:22:25.763 09:59:50 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:22:25.763 09:59:50 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:22:25.763 09:59:50 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:22:25.763 09:59:50 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:22:25.763 09:59:50 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:22:25.763 09:59:50 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:22:25.763 09:59:50 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:22:25.763 09:59:50 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:22:25.763 09:59:50 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:22:25.763 09:59:50 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:22:25.763 09:59:50 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:22:25.763 09:59:50 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:22:25.763 09:59:50 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:22:25.763 09:59:50 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:22:25.763 09:59:50 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:22:25.763 09:59:50 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:22:25.763 09:59:50 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:22:25.763 09:59:50 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:22:25.763 09:59:50 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:22:25.763 09:59:50 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:22:25.763 09:59:50 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:22:25.763 09:59:50 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:22:25.763 09:59:50 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 2 )) 00:22:25.763 09:59:50 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:22:25.764 09:59:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 
00:22:25.764 09:59:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:22:25.764 09:59:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:22:25.764 09:59:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:25.764 09:59:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:25.764 09:59:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:22:25.764 ************************************ 00:22:25.764 START TEST spdk_target_abort 00:22:25.764 ************************************ 00:22:25.764 09:59:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:22:25.764 09:59:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:22:25.764 09:59:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:22:25.764 09:59:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.764 09:59:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:22:25.764 spdk_targetn1 00:22:25.764 09:59:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.764 09:59:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:25.764 09:59:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.764 09:59:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:22:25.764 [2024-12-06 09:59:50.882324] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:25.764 09:59:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.764 09:59:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:22:25.764 09:59:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.764 09:59:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:22:25.764 09:59:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.764 09:59:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:22:25.764 09:59:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.764 09:59:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:22:25.764 09:59:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.764 09:59:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420 00:22:25.764 09:59:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.764 09:59:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:22:25.764 [2024-12-06 09:59:50.919041] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:25.764 09:59:50 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.764 09:59:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.3 4420 nqn.2016-06.io.spdk:testnqn 00:22:25.764 09:59:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:22:25.764 09:59:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:22:25.764 09:59:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.3 00:22:25.764 09:59:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:22:25.764 09:59:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:22:25.764 09:59:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:22:25.764 09:59:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:22:25.764 09:59:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:22:25.764 09:59:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:25.764 09:59:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:22:25.764 09:59:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:25.764 09:59:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:22:25.764 09:59:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:25.764 09:59:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3' 00:22:25.764 09:59:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:25.764 09:59:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:22:25.764 09:59:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:25.764 09:59:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:25.764 09:59:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:22:25.764 09:59:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:29.052 Initializing NVMe Controllers 00:22:29.052 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:22:29.052 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:22:29.052 Initialization complete. Launching workers. 
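The spdk_target_abort setup traced above boils down to a handful of RPCs followed by the abort example at each queue depth. A condensed replay is sketched below; rpc_cmd in the trace is a wrapper around scripts/rpc.py talking to the running nvmf target, and the addresses and NQN are the ones shown in the log.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target   # local NVMe -> bdev spdk_targetn1
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420

  # one abort pass per queue depth, as in the qds=(4 24 64) loop above
  for qd in 4 24 64; do
      /home/vagrant/spdk_repo/spdk/build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
          -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
  done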
00:22:29.052 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 9507, failed: 0 00:22:29.052 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1058, failed to submit 8449 00:22:29.052 success 903, unsuccessful 155, failed 0 00:22:29.052 09:59:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:22:29.052 09:59:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:32.341 Initializing NVMe Controllers 00:22:32.341 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:22:32.341 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:22:32.341 Initialization complete. Launching workers. 00:22:32.341 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8910, failed: 0 00:22:32.341 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1147, failed to submit 7763 00:22:32.341 success 433, unsuccessful 714, failed 0 00:22:32.341 09:59:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:22:32.342 09:59:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:35.659 Initializing NVMe Controllers 00:22:35.659 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:22:35.659 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:22:35.659 Initialization complete. Launching workers. 
00:22:35.659 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 29982, failed: 0 00:22:35.659 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2254, failed to submit 27728 00:22:35.659 success 377, unsuccessful 1877, failed 0 00:22:35.659 10:00:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:22:35.660 10:00:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.660 10:00:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:22:35.660 10:00:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.660 10:00:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:22:35.660 10:00:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.660 10:00:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:22:36.226 10:00:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.226 10:00:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 84247 00:22:36.226 10:00:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 84247 ']' 00:22:36.226 10:00:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 84247 00:22:36.226 10:00:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:22:36.226 10:00:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:36.226 10:00:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84247 00:22:36.226 10:00:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:36.226 10:00:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:36.226 10:00:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84247' 00:22:36.226 killing process with pid 84247 00:22:36.226 10:00:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 84247 00:22:36.226 10:00:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 84247 00:22:36.483 00:22:36.483 real 0m10.761s 00:22:36.483 user 0m41.461s 00:22:36.483 sys 0m1.944s 00:22:36.483 10:00:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:36.483 10:00:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:22:36.483 ************************************ 00:22:36.483 END TEST spdk_target_abort 00:22:36.483 ************************************ 00:22:36.483 10:00:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:22:36.483 10:00:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:36.483 10:00:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:36.483 10:00:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:22:36.483 ************************************ 00:22:36.483 START TEST kernel_target_abort 00:22:36.483 
************************************ 00:22:36.483 10:00:01 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:22:36.483 10:00:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:22:36.483 10:00:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:22:36.483 10:00:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:36.483 10:00:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:36.483 10:00:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:36.483 10:00:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:36.483 10:00:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:36.483 10:00:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:36.483 10:00:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:36.483 10:00:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:36.483 10:00:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:36.483 10:00:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:22:36.483 10:00:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:22:36.483 10:00:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:22:36.483 10:00:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:36.483 10:00:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:36.483 10:00:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:22:36.483 10:00:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:22:36.483 10:00:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:22:36.483 10:00:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:22:36.483 10:00:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:22:36.483 10:00:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:22:36.741 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:36.998 Waiting for block devices as requested 00:22:36.998 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:22:36.998 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:22:36.998 10:00:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:22:36.998 10:00:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:22:36.998 10:00:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:22:36.998 10:00:02 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:22:36.998 10:00:02 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:22:36.998 10:00:02 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:22:36.998 10:00:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:22:36.998 10:00:02 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:22:36.998 10:00:02 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:22:37.256 No valid GPT data, bailing 00:22:37.256 10:00:02 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:22:37.256 10:00:02 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:22:37.256 10:00:02 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:22:37.256 10:00:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:22:37.256 10:00:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:22:37.256 10:00:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:22:37.256 10:00:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:22:37.256 10:00:02 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:22:37.256 10:00:02 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:22:37.256 10:00:02 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:22:37.256 10:00:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:22:37.256 10:00:02 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:22:37.256 10:00:02 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:22:37.256 No valid GPT data, bailing 00:22:37.256 10:00:02 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
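The device probe traced here selects an NVMe namespace that is neither zoned nor carrying a partition table, so it can safely be exported through the kernel target. A condensed sketch of that selection follows; the "No valid GPT data, bailing" messages above come from spdk-gpt.py, and blkid is the final check before a device is considered free.

  # Pick an unused, non-zoned NVMe block device for the kernel nvmet target.
  for block in /sys/block/nvme*; do
      dev=${block##*/}
      # skip zoned namespaces
      [[ -e $block/queue/zoned && $(<"$block/queue/zoned") != none ]] && continue
      # no partition table -> treat the device as free to use
      if [[ -z $(blkid -s PTTYPE -o value "/dev/$dev") ]]; then
          nvme=/dev/$dev
      fi
  done
  echo "selected: $nvme"            # /dev/nvme1n1 in the run above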
00:22:37.256 10:00:02 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:22:37.256 10:00:02 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:22:37.256 10:00:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:22:37.256 10:00:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:22:37.256 10:00:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:22:37.256 10:00:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:22:37.256 10:00:02 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:22:37.256 10:00:02 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:22:37.256 10:00:02 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:22:37.256 10:00:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:22:37.256 10:00:02 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:22:37.256 10:00:02 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:22:37.256 No valid GPT data, bailing 00:22:37.256 10:00:02 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:22:37.256 10:00:02 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:22:37.256 10:00:02 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:22:37.256 10:00:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:22:37.256 10:00:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:22:37.256 10:00:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:22:37.256 10:00:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:22:37.256 10:00:02 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:22:37.256 10:00:02 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:22:37.256 10:00:02 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:22:37.256 10:00:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:22:37.256 10:00:02 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:22:37.256 10:00:02 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:22:37.256 No valid GPT data, bailing 00:22:37.256 10:00:02 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:22:37.256 10:00:02 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:22:37.256 10:00:02 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:22:37.256 10:00:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:22:37.256 10:00:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ 
-b /dev/nvme1n1 ]] 00:22:37.256 10:00:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:37.569 10:00:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:37.569 10:00:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:22:37.569 10:00:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:22:37.569 10:00:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:22:37.569 10:00:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:22:37.569 10:00:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:22:37.569 10:00:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:22:37.569 10:00:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:22:37.569 10:00:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:22:37.569 10:00:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:22:37.569 10:00:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:22:37.569 10:00:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 --hostid=8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 -a 10.0.0.1 -t tcp -s 4420 00:22:37.569 00:22:37.569 Discovery Log Number of Records 2, Generation counter 2 00:22:37.569 =====Discovery Log Entry 0====== 00:22:37.569 trtype: tcp 00:22:37.569 adrfam: ipv4 00:22:37.569 subtype: current discovery subsystem 00:22:37.569 treq: not specified, sq flow control disable supported 00:22:37.569 portid: 1 00:22:37.569 trsvcid: 4420 00:22:37.569 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:22:37.569 traddr: 10.0.0.1 00:22:37.569 eflags: none 00:22:37.569 sectype: none 00:22:37.569 =====Discovery Log Entry 1====== 00:22:37.569 trtype: tcp 00:22:37.569 adrfam: ipv4 00:22:37.569 subtype: nvme subsystem 00:22:37.569 treq: not specified, sq flow control disable supported 00:22:37.569 portid: 1 00:22:37.569 trsvcid: 4420 00:22:37.569 subnqn: nqn.2016-06.io.spdk:testnqn 00:22:37.569 traddr: 10.0.0.1 00:22:37.569 eflags: none 00:22:37.569 sectype: none 00:22:37.569 10:00:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:22:37.569 10:00:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:22:37.569 10:00:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:22:37.569 10:00:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:22:37.569 10:00:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:22:37.569 10:00:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:22:37.569 10:00:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:22:37.569 10:00:02 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:22:37.569 10:00:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:22:37.569 10:00:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:37.569 10:00:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:22:37.569 10:00:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:37.569 10:00:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:22:37.570 10:00:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:37.570 10:00:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:22:37.570 10:00:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:37.570 10:00:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:22:37.570 10:00:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:37.570 10:00:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:37.570 10:00:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:22:37.570 10:00:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:40.850 Initializing NVMe Controllers 00:22:40.850 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:22:40.850 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:22:40.850 Initialization complete. Launching workers. 00:22:40.850 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 35804, failed: 0 00:22:40.850 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 35804, failed to submit 0 00:22:40.850 success 0, unsuccessful 35804, failed 0 00:22:40.850 10:00:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:22:40.850 10:00:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:44.137 Initializing NVMe Controllers 00:22:44.137 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:22:44.137 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:22:44.137 Initialization complete. Launching workers. 
00:22:44.137 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 72353, failed: 0 00:22:44.137 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 31208, failed to submit 41145 00:22:44.137 success 0, unsuccessful 31208, failed 0 00:22:44.137 10:00:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:22:44.137 10:00:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:47.427 Initializing NVMe Controllers 00:22:47.427 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:22:47.427 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:22:47.427 Initialization complete. Launching workers. 00:22:47.427 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 96536, failed: 0 00:22:47.427 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 24243, failed to submit 72293 00:22:47.427 success 0, unsuccessful 24243, failed 0 00:22:47.427 10:00:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:22:47.427 10:00:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:22:47.427 10:00:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:22:47.427 10:00:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:47.427 10:00:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:47.427 10:00:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:22:47.427 10:00:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:47.427 10:00:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:22:47.427 10:00:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:22:47.427 10:00:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:22:47.686 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:50.972 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:22:50.972 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:22:50.972 ************************************ 00:22:50.972 END TEST kernel_target_abort 00:22:50.972 ************************************ 00:22:50.972 00:22:50.972 real 0m14.177s 00:22:50.972 user 0m6.200s 00:22:50.972 sys 0m5.281s 00:22:50.972 10:00:15 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:50.972 10:00:15 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:22:50.972 10:00:15 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:22:50.972 10:00:15 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:22:50.972 
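The kernel target exercised in this test is plain nvmet driven through configfs; the setup traced earlier and the clean_kernel_target teardown traced just above reduce to roughly the following. Device node, NQN, and addresses are the ones from the log; attribute file names follow the standard nvmet configfs layout.

  nvmet=/sys/kernel/config/nvmet
  subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn

  modprobe nvmet
  modprobe nvmet_tcp                              # needed for the tcp port below
  mkdir -p "$subsys/namespaces/1" "$nvmet/ports/1"
  echo 1            > "$subsys/attr_allow_any_host"
  echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"
  echo 1            > "$subsys/namespaces/1/enable"
  echo 10.0.0.1     > "$nvmet/ports/1/addr_traddr"
  echo tcp          > "$nvmet/ports/1/addr_trtype"
  echo 4420         > "$nvmet/ports/1/addr_trsvcid"
  echo ipv4         > "$nvmet/ports/1/addr_adrfam"
  ln -s "$subsys" "$nvmet/ports/1/subsystems/"
  nvme discover -t tcp -a 10.0.0.1 -s 4420        # should list nqn.2016-06.io.spdk:testnqn

  # teardown, as traced by clean_kernel_target above
  echo 0 > "$subsys/namespaces/1/enable"
  rm -f "$nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn"
  rmdir "$subsys/namespaces/1" "$nvmet/ports/1" "$subsys"
  modprobe -r nvmet_tcp nvmet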
10:00:15 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:50.972 10:00:15 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:22:50.972 10:00:15 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:50.972 10:00:15 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:22:50.972 10:00:15 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:50.972 10:00:15 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:50.972 rmmod nvme_tcp 00:22:50.972 rmmod nvme_fabrics 00:22:50.972 rmmod nvme_keyring 00:22:50.972 10:00:15 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:50.972 10:00:15 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:22:50.972 10:00:15 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:22:50.972 10:00:15 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 84247 ']' 00:22:50.972 10:00:15 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 84247 00:22:50.972 10:00:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 84247 ']' 00:22:50.972 10:00:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 84247 00:22:50.972 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (84247) - No such process 00:22:50.972 Process with pid 84247 is not found 00:22:50.972 10:00:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 84247 is not found' 00:22:50.972 10:00:15 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:22:50.972 10:00:15 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:22:51.231 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:51.231 Waiting for block devices as requested 00:22:51.231 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:22:51.231 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:22:51.490 10:00:16 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:51.490 10:00:16 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:51.490 10:00:16 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:22:51.490 10:00:16 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:22:51.490 10:00:16 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:22:51.490 10:00:16 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:51.490 10:00:16 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:51.490 10:00:16 nvmf_abort_qd_sizes -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:22:51.490 10:00:16 nvmf_abort_qd_sizes -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:22:51.490 10:00:16 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:22:51.490 10:00:16 nvmf_abort_qd_sizes -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:22:51.491 10:00:16 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:22:51.491 10:00:16 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:22:51.491 10:00:16 nvmf_abort_qd_sizes -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:22:51.491 10:00:16 nvmf_abort_qd_sizes -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:22:51.491 10:00:16 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:22:51.491 10:00:16 nvmf_abort_qd_sizes 
-- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:22:51.491 10:00:16 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:22:51.491 10:00:16 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:22:51.491 10:00:16 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:51.491 10:00:16 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:51.491 10:00:16 nvmf_abort_qd_sizes -- nvmf/common.sh@246 -- # remove_spdk_ns 00:22:51.491 10:00:16 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:51.491 10:00:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:22:51.491 10:00:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:51.779 10:00:16 nvmf_abort_qd_sizes -- nvmf/common.sh@300 -- # return 0 00:22:51.779 00:22:51.779 real 0m27.938s 00:22:51.779 user 0m48.830s 00:22:51.779 sys 0m8.630s 00:22:51.779 10:00:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:51.779 ************************************ 00:22:51.779 END TEST nvmf_abort_qd_sizes 00:22:51.779 ************************************ 00:22:51.779 10:00:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:22:51.779 10:00:16 -- spdk/autotest.sh@292 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:22:51.779 10:00:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:51.779 10:00:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:51.779 10:00:16 -- common/autotest_common.sh@10 -- # set +x 00:22:51.779 ************************************ 00:22:51.779 START TEST keyring_file 00:22:51.779 ************************************ 00:22:51.779 10:00:16 keyring_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:22:51.779 * Looking for test storage... 
00:22:51.779 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:22:51.779 10:00:16 keyring_file -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:51.779 10:00:16 keyring_file -- common/autotest_common.sh@1711 -- # lcov --version 00:22:51.779 10:00:16 keyring_file -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:51.779 10:00:17 keyring_file -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:51.779 10:00:17 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:51.779 10:00:17 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:51.779 10:00:17 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:51.779 10:00:17 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:22:51.779 10:00:17 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:22:51.779 10:00:17 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:22:51.779 10:00:17 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:22:51.779 10:00:17 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:22:51.779 10:00:17 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:22:51.779 10:00:17 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:22:51.779 10:00:17 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:51.779 10:00:17 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:22:51.779 10:00:17 keyring_file -- scripts/common.sh@345 -- # : 1 00:22:51.779 10:00:17 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:51.779 10:00:17 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:51.779 10:00:17 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:22:51.779 10:00:17 keyring_file -- scripts/common.sh@353 -- # local d=1 00:22:51.779 10:00:17 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:51.779 10:00:17 keyring_file -- scripts/common.sh@355 -- # echo 1 00:22:51.779 10:00:17 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:22:51.779 10:00:17 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:22:51.779 10:00:17 keyring_file -- scripts/common.sh@353 -- # local d=2 00:22:51.779 10:00:17 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:51.779 10:00:17 keyring_file -- scripts/common.sh@355 -- # echo 2 00:22:52.042 10:00:17 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:22:52.042 10:00:17 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:52.042 10:00:17 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:52.042 10:00:17 keyring_file -- scripts/common.sh@368 -- # return 0 00:22:52.042 10:00:17 keyring_file -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:52.042 10:00:17 keyring_file -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:52.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:52.042 --rc genhtml_branch_coverage=1 00:22:52.042 --rc genhtml_function_coverage=1 00:22:52.042 --rc genhtml_legend=1 00:22:52.042 --rc geninfo_all_blocks=1 00:22:52.042 --rc geninfo_unexecuted_blocks=1 00:22:52.042 00:22:52.042 ' 00:22:52.042 10:00:17 keyring_file -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:52.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:52.042 --rc genhtml_branch_coverage=1 00:22:52.042 --rc genhtml_function_coverage=1 00:22:52.042 --rc genhtml_legend=1 00:22:52.042 --rc geninfo_all_blocks=1 00:22:52.042 --rc 
geninfo_unexecuted_blocks=1 00:22:52.042 00:22:52.042 ' 00:22:52.042 10:00:17 keyring_file -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:52.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:52.042 --rc genhtml_branch_coverage=1 00:22:52.042 --rc genhtml_function_coverage=1 00:22:52.042 --rc genhtml_legend=1 00:22:52.042 --rc geninfo_all_blocks=1 00:22:52.042 --rc geninfo_unexecuted_blocks=1 00:22:52.042 00:22:52.042 ' 00:22:52.042 10:00:17 keyring_file -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:52.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:52.042 --rc genhtml_branch_coverage=1 00:22:52.042 --rc genhtml_function_coverage=1 00:22:52.042 --rc genhtml_legend=1 00:22:52.042 --rc geninfo_all_blocks=1 00:22:52.042 --rc geninfo_unexecuted_blocks=1 00:22:52.042 00:22:52.042 ' 00:22:52.042 10:00:17 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:22:52.042 10:00:17 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:52.042 10:00:17 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:22:52.042 10:00:17 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:52.042 10:00:17 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:52.042 10:00:17 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:52.042 10:00:17 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:52.042 10:00:17 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:52.042 10:00:17 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:52.042 10:00:17 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:52.042 10:00:17 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:52.042 10:00:17 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:52.042 10:00:17 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:52.042 10:00:17 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:22:52.042 10:00:17 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:22:52.042 10:00:17 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:52.042 10:00:17 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:52.042 10:00:17 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:52.042 10:00:17 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:52.042 10:00:17 keyring_file -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:52.042 10:00:17 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:22:52.042 10:00:17 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:52.042 10:00:17 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:52.042 10:00:17 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:52.042 10:00:17 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:52.042 10:00:17 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:52.042 10:00:17 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:52.042 10:00:17 keyring_file -- paths/export.sh@5 -- # export PATH 00:22:52.042 10:00:17 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:52.042 10:00:17 keyring_file -- nvmf/common.sh@51 -- # : 0 00:22:52.042 10:00:17 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:52.042 10:00:17 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:52.042 10:00:17 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:52.042 10:00:17 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:52.042 10:00:17 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:52.042 10:00:17 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:52.042 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:52.042 10:00:17 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:52.042 10:00:17 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:52.042 10:00:17 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:52.042 10:00:17 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:22:52.042 10:00:17 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:22:52.042 10:00:17 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:22:52.042 10:00:17 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:22:52.042 10:00:17 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:22:52.042 10:00:17 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:22:52.042 10:00:17 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:22:52.042 10:00:17 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:22:52.042 10:00:17 
keyring_file -- keyring/common.sh@17 -- # name=key0 00:22:52.042 10:00:17 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:22:52.042 10:00:17 keyring_file -- keyring/common.sh@17 -- # digest=0 00:22:52.042 10:00:17 keyring_file -- keyring/common.sh@18 -- # mktemp 00:22:52.042 10:00:17 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.n68EDAqr4h 00:22:52.042 10:00:17 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:22:52.042 10:00:17 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:22:52.042 10:00:17 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:22:52.042 10:00:17 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:52.042 10:00:17 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:22:52.042 10:00:17 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:22:52.042 10:00:17 keyring_file -- nvmf/common.sh@733 -- # python - 00:22:52.042 10:00:17 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.n68EDAqr4h 00:22:52.042 10:00:17 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.n68EDAqr4h 00:22:52.042 10:00:17 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.n68EDAqr4h 00:22:52.042 10:00:17 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:22:52.042 10:00:17 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:22:52.042 10:00:17 keyring_file -- keyring/common.sh@17 -- # name=key1 00:22:52.042 10:00:17 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:22:52.042 10:00:17 keyring_file -- keyring/common.sh@17 -- # digest=0 00:22:52.042 10:00:17 keyring_file -- keyring/common.sh@18 -- # mktemp 00:22:52.043 10:00:17 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.ejnEOWMR4Y 00:22:52.043 10:00:17 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:22:52.043 10:00:17 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:22:52.043 10:00:17 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:22:52.043 10:00:17 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:52.043 10:00:17 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:22:52.043 10:00:17 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:22:52.043 10:00:17 keyring_file -- nvmf/common.sh@733 -- # python - 00:22:52.043 10:00:17 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.ejnEOWMR4Y 00:22:52.043 10:00:17 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.ejnEOWMR4Y 00:22:52.043 10:00:17 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.ejnEOWMR4Y 00:22:52.043 10:00:17 keyring_file -- keyring/file.sh@30 -- # tgtpid=85157 00:22:52.043 10:00:17 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:52.043 10:00:17 keyring_file -- keyring/file.sh@32 -- # waitforlisten 85157 00:22:52.043 10:00:17 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 85157 ']' 00:22:52.043 10:00:17 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:52.043 10:00:17 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:52.043 10:00:17 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:22:52.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:52.043 10:00:17 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:52.043 10:00:17 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:22:52.043 [2024-12-06 10:00:17.251221] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:22:52.043 [2024-12-06 10:00:17.251737] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85157 ] 00:22:52.302 [2024-12-06 10:00:17.403663] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:52.302 [2024-12-06 10:00:17.461497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:52.302 [2024-12-06 10:00:17.545556] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:52.561 10:00:17 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:52.561 10:00:17 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:22:52.561 10:00:17 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:22:52.561 10:00:17 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.561 10:00:17 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:22:52.561 [2024-12-06 10:00:17.788476] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:52.561 null0 00:22:52.561 [2024-12-06 10:00:17.820444] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:52.561 [2024-12-06 10:00:17.820874] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:22:52.819 10:00:17 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.819 10:00:17 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:22:52.819 10:00:17 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:22:52.819 10:00:17 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:22:52.819 10:00:17 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:52.819 10:00:17 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:52.819 10:00:17 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:52.819 10:00:17 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:52.819 10:00:17 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:22:52.819 10:00:17 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.819 10:00:17 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:22:52.819 [2024-12-06 10:00:17.852391] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:22:52.819 request: 00:22:52.819 { 00:22:52.819 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:22:52.819 "secure_channel": false, 00:22:52.819 "listen_address": { 00:22:52.819 "trtype": "tcp", 00:22:52.819 "traddr": "127.0.0.1", 00:22:52.819 "trsvcid": "4420" 00:22:52.819 }, 00:22:52.819 "method": "nvmf_subsystem_add_listener", 00:22:52.819 "req_id": 1 00:22:52.819 } 
00:22:52.819 Got JSON-RPC error response 00:22:52.819 response: 00:22:52.819 { 00:22:52.819 "code": -32602, 00:22:52.819 "message": "Invalid parameters" 00:22:52.819 } 00:22:52.819 10:00:17 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:52.819 10:00:17 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:22:52.819 10:00:17 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:52.819 10:00:17 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:52.819 10:00:17 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:52.819 10:00:17 keyring_file -- keyring/file.sh@47 -- # bperfpid=85167 00:22:52.819 10:00:17 keyring_file -- keyring/file.sh@49 -- # waitforlisten 85167 /var/tmp/bperf.sock 00:22:52.819 10:00:17 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 85167 ']' 00:22:52.819 10:00:17 keyring_file -- keyring/file.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:22:52.819 10:00:17 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:52.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:52.819 10:00:17 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:52.819 10:00:17 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:52.819 10:00:17 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:52.819 10:00:17 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:22:52.819 [2024-12-06 10:00:17.920800] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
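The keyring_file flow being exercised here (key files prepared earlier in the trace, RPCs issued against the bdevperf app just below) condenses to roughly the sketch that follows: write the PSK to a 0600 file, register it by name over the app's RPC socket, then attach a TLS-enabled controller referring to the key by name. The key path is whatever mktemp returned in this run; the interchange-format conversion is done by the prep_key/format_interchange_psk helpers shown in the trace.

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
  key0path=$(mktemp)                  # /tmp/tmp.n68EDAqr4h in this run
  # prep_key writes key0 (00112233445566778899aabbccddeeff) to the file in
  # NVMe TLS PSK interchange format (NVMeTLSkey-1:...), then locks it down:
  chmod 0600 "$key0path"

  $rpc keyring_file_add_key key0 "$key0path"
  $rpc keyring_get_keys | jq '.[] | select(.name == "key0")'

  $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0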
00:22:52.819 [2024-12-06 10:00:17.920884] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85167 ] 00:22:52.819 [2024-12-06 10:00:18.074377] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:53.077 [2024-12-06 10:00:18.130876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:53.077 [2024-12-06 10:00:18.189646] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:53.077 10:00:18 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:53.077 10:00:18 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:22:53.077 10:00:18 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.n68EDAqr4h 00:22:53.077 10:00:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.n68EDAqr4h 00:22:53.335 10:00:18 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.ejnEOWMR4Y 00:22:53.335 10:00:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.ejnEOWMR4Y 00:22:53.594 10:00:18 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:22:53.594 10:00:18 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:22:53.594 10:00:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:53.594 10:00:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:53.594 10:00:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:53.852 10:00:19 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.n68EDAqr4h == \/\t\m\p\/\t\m\p\.\n\6\8\E\D\A\q\r\4\h ]] 00:22:53.852 10:00:19 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:22:53.852 10:00:19 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:22:53.852 10:00:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:22:53.852 10:00:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:53.852 10:00:19 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:54.419 10:00:19 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.ejnEOWMR4Y == \/\t\m\p\/\t\m\p\.\e\j\n\E\O\W\M\R\4\Y ]] 00:22:54.419 10:00:19 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:22:54.419 10:00:19 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:22:54.419 10:00:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:54.419 10:00:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:54.419 10:00:19 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:54.419 10:00:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:54.419 10:00:19 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:22:54.419 10:00:19 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:22:54.419 10:00:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:54.419 10:00:19 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:22:54.419 10:00:19 keyring_file -- 
keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:22:54.419 10:00:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:54.419 10:00:19 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:54.678 10:00:19 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:22:54.678 10:00:19 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:54.678 10:00:19 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:54.937 [2024-12-06 10:00:20.151078] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:55.196 nvme0n1 00:22:55.196 10:00:20 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:22:55.196 10:00:20 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:22:55.196 10:00:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:55.196 10:00:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:55.196 10:00:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:55.196 10:00:20 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:55.196 10:00:20 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:22:55.455 10:00:20 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:22:55.455 10:00:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:55.455 10:00:20 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:22:55.455 10:00:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:55.455 10:00:20 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:55.455 10:00:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:22:55.455 10:00:20 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:22:55.455 10:00:20 keyring_file -- keyring/file.sh@63 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:55.715 Running I/O for 1 seconds... 
00:22:56.652 12109.00 IOPS, 47.30 MiB/s 00:22:56.652 Latency(us) 00:22:56.652 [2024-12-06T10:00:21.924Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:56.652 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:22:56.652 nvme0n1 : 1.01 12156.95 47.49 0.00 0.00 10496.72 4766.25 19184.17 00:22:56.652 [2024-12-06T10:00:21.924Z] =================================================================================================================== 00:22:56.652 [2024-12-06T10:00:21.924Z] Total : 12156.95 47.49 0.00 0.00 10496.72 4766.25 19184.17 00:22:56.652 { 00:22:56.652 "results": [ 00:22:56.652 { 00:22:56.652 "job": "nvme0n1", 00:22:56.652 "core_mask": "0x2", 00:22:56.652 "workload": "randrw", 00:22:56.652 "percentage": 50, 00:22:56.652 "status": "finished", 00:22:56.652 "queue_depth": 128, 00:22:56.652 "io_size": 4096, 00:22:56.652 "runtime": 1.006667, 00:22:56.652 "iops": 12156.9496169041, 00:22:56.652 "mibps": 47.48808444103164, 00:22:56.652 "io_failed": 0, 00:22:56.652 "io_timeout": 0, 00:22:56.652 "avg_latency_us": 10496.722180986199, 00:22:56.652 "min_latency_us": 4766.254545454545, 00:22:56.652 "max_latency_us": 19184.174545454545 00:22:56.652 } 00:22:56.652 ], 00:22:56.652 "core_count": 1 00:22:56.652 } 00:22:56.652 10:00:21 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:22:56.652 10:00:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:22:56.910 10:00:22 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:22:56.910 10:00:22 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:22:56.910 10:00:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:56.910 10:00:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:56.910 10:00:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:56.911 10:00:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:57.168 10:00:22 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:22:57.168 10:00:22 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:22:57.168 10:00:22 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:22:57.168 10:00:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:57.168 10:00:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:57.168 10:00:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:57.168 10:00:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:22:57.425 10:00:22 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:22:57.425 10:00:22 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:22:57.425 10:00:22 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:22:57.425 10:00:22 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:22:57.425 10:00:22 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:22:57.425 10:00:22 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:57.425 10:00:22 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:22:57.425 10:00:22 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:57.425 10:00:22 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:22:57.425 10:00:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:22:57.684 [2024-12-06 10:00:22.855055] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:57.684 [2024-12-06 10:00:22.855645] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18615d0 (107): Transport endpoint is not connected 00:22:57.684 [2024-12-06 10:00:22.856609] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18615d0 (9): Bad file descriptor 00:22:57.684 [2024-12-06 10:00:22.857606] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:22:57.684 [2024-12-06 10:00:22.857636] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:22:57.684 [2024-12-06 10:00:22.857649] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:22:57.684 [2024-12-06 10:00:22.857661] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:22:57.684 request: 00:22:57.684 { 00:22:57.684 "name": "nvme0", 00:22:57.684 "trtype": "tcp", 00:22:57.684 "traddr": "127.0.0.1", 00:22:57.684 "adrfam": "ipv4", 00:22:57.684 "trsvcid": "4420", 00:22:57.684 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:57.684 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:57.684 "prchk_reftag": false, 00:22:57.684 "prchk_guard": false, 00:22:57.684 "hdgst": false, 00:22:57.684 "ddgst": false, 00:22:57.684 "psk": "key1", 00:22:57.684 "allow_unrecognized_csi": false, 00:22:57.684 "method": "bdev_nvme_attach_controller", 00:22:57.684 "req_id": 1 00:22:57.684 } 00:22:57.684 Got JSON-RPC error response 00:22:57.684 response: 00:22:57.684 { 00:22:57.684 "code": -5, 00:22:57.684 "message": "Input/output error" 00:22:57.684 } 00:22:57.684 10:00:22 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:22:57.684 10:00:22 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:57.684 10:00:22 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:57.684 10:00:22 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:57.684 10:00:22 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:22:57.684 10:00:22 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:22:57.684 10:00:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:57.684 10:00:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:57.684 10:00:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:57.685 10:00:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:57.943 10:00:23 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:22:57.943 10:00:23 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:22:57.943 10:00:23 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:22:57.943 10:00:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:57.943 10:00:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:57.943 10:00:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:57.943 10:00:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:22:58.201 10:00:23 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:22:58.201 10:00:23 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:22:58.201 10:00:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:22:58.459 10:00:23 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:22:58.459 10:00:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:22:58.717 10:00:23 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:22:58.717 10:00:23 keyring_file -- keyring/file.sh@78 -- # jq length 00:22:58.717 10:00:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:58.974 10:00:24 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:22:58.974 10:00:24 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.n68EDAqr4h 00:22:58.974 10:00:24 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.n68EDAqr4h 00:22:58.974 10:00:24 keyring_file -- 
common/autotest_common.sh@652 -- # local es=0 00:22:58.974 10:00:24 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.n68EDAqr4h 00:22:58.974 10:00:24 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:22:58.974 10:00:24 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:58.974 10:00:24 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:22:58.974 10:00:24 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:58.974 10:00:24 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.n68EDAqr4h 00:22:58.974 10:00:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.n68EDAqr4h 00:22:59.231 [2024-12-06 10:00:24.404611] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.n68EDAqr4h': 0100660 00:22:59.231 [2024-12-06 10:00:24.404680] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:22:59.231 request: 00:22:59.231 { 00:22:59.231 "name": "key0", 00:22:59.231 "path": "/tmp/tmp.n68EDAqr4h", 00:22:59.231 "method": "keyring_file_add_key", 00:22:59.231 "req_id": 1 00:22:59.231 } 00:22:59.231 Got JSON-RPC error response 00:22:59.231 response: 00:22:59.231 { 00:22:59.231 "code": -1, 00:22:59.231 "message": "Operation not permitted" 00:22:59.231 } 00:22:59.231 10:00:24 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:22:59.231 10:00:24 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:59.231 10:00:24 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:59.231 10:00:24 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:59.231 10:00:24 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.n68EDAqr4h 00:22:59.231 10:00:24 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.n68EDAqr4h 00:22:59.231 10:00:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.n68EDAqr4h 00:22:59.489 10:00:24 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.n68EDAqr4h 00:22:59.489 10:00:24 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:22:59.489 10:00:24 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:22:59.489 10:00:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:59.489 10:00:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:59.489 10:00:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:59.489 10:00:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:59.746 10:00:24 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:22:59.746 10:00:24 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:59.746 10:00:24 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:22:59.746 10:00:24 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:59.746 10:00:24 
keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:22:59.746 10:00:24 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:59.746 10:00:24 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:22:59.746 10:00:24 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:59.746 10:00:24 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:59.746 10:00:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:00.003 [2024-12-06 10:00:25.128821] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.n68EDAqr4h': No such file or directory 00:23:00.003 [2024-12-06 10:00:25.128878] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:23:00.003 [2024-12-06 10:00:25.128900] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:23:00.003 [2024-12-06 10:00:25.128911] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:23:00.003 [2024-12-06 10:00:25.128921] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:00.003 [2024-12-06 10:00:25.128930] bdev_nvme.c:6796:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:23:00.003 request: 00:23:00.003 { 00:23:00.003 "name": "nvme0", 00:23:00.003 "trtype": "tcp", 00:23:00.003 "traddr": "127.0.0.1", 00:23:00.003 "adrfam": "ipv4", 00:23:00.003 "trsvcid": "4420", 00:23:00.003 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:00.003 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:00.003 "prchk_reftag": false, 00:23:00.003 "prchk_guard": false, 00:23:00.003 "hdgst": false, 00:23:00.003 "ddgst": false, 00:23:00.003 "psk": "key0", 00:23:00.003 "allow_unrecognized_csi": false, 00:23:00.003 "method": "bdev_nvme_attach_controller", 00:23:00.003 "req_id": 1 00:23:00.003 } 00:23:00.003 Got JSON-RPC error response 00:23:00.003 response: 00:23:00.003 { 00:23:00.003 "code": -19, 00:23:00.003 "message": "No such device" 00:23:00.003 } 00:23:00.003 10:00:25 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:23:00.003 10:00:25 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:00.003 10:00:25 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:00.003 10:00:25 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:00.003 10:00:25 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:23:00.003 10:00:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:23:00.260 10:00:25 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:23:00.260 10:00:25 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:23:00.260 10:00:25 keyring_file -- keyring/common.sh@17 -- # name=key0 00:23:00.260 10:00:25 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:23:00.260 
10:00:25 keyring_file -- keyring/common.sh@17 -- # digest=0 00:23:00.260 10:00:25 keyring_file -- keyring/common.sh@18 -- # mktemp 00:23:00.260 10:00:25 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.6a79bI44kZ 00:23:00.260 10:00:25 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:23:00.260 10:00:25 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:23:00.260 10:00:25 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:23:00.260 10:00:25 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:23:00.260 10:00:25 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:23:00.260 10:00:25 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:23:00.260 10:00:25 keyring_file -- nvmf/common.sh@733 -- # python - 00:23:00.260 10:00:25 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.6a79bI44kZ 00:23:00.260 10:00:25 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.6a79bI44kZ 00:23:00.260 10:00:25 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.6a79bI44kZ 00:23:00.260 10:00:25 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.6a79bI44kZ 00:23:00.260 10:00:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.6a79bI44kZ 00:23:00.517 10:00:25 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:00.517 10:00:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:00.775 nvme0n1 00:23:00.775 10:00:25 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:23:00.775 10:00:25 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:23:00.775 10:00:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:00.775 10:00:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:00.775 10:00:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:00.775 10:00:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:01.032 10:00:26 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:23:01.032 10:00:26 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:23:01.032 10:00:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:23:01.290 10:00:26 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:23:01.290 10:00:26 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:23:01.290 10:00:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:01.290 10:00:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:01.290 10:00:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:01.558 10:00:26 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:23:01.558 10:00:26 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:23:01.558 10:00:26 keyring_file -- 
keyring/common.sh@12 -- # jq -r .refcnt 00:23:01.558 10:00:26 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:23:01.558 10:00:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:01.558 10:00:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:01.558 10:00:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:01.834 10:00:27 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:23:01.834 10:00:27 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:23:01.834 10:00:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:23:02.105 10:00:27 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:23:02.105 10:00:27 keyring_file -- keyring/file.sh@105 -- # jq length 00:23:02.105 10:00:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:02.364 10:00:27 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:23:02.364 10:00:27 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.6a79bI44kZ 00:23:02.364 10:00:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.6a79bI44kZ 00:23:02.624 10:00:27 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.ejnEOWMR4Y 00:23:02.624 10:00:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.ejnEOWMR4Y 00:23:02.883 10:00:27 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:02.883 10:00:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:03.143 nvme0n1 00:23:03.143 10:00:28 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:23:03.143 10:00:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:23:03.713 10:00:28 keyring_file -- keyring/file.sh@113 -- # config='{ 00:23:03.713 "subsystems": [ 00:23:03.713 { 00:23:03.713 "subsystem": "keyring", 00:23:03.713 "config": [ 00:23:03.713 { 00:23:03.713 "method": "keyring_file_add_key", 00:23:03.713 "params": { 00:23:03.713 "name": "key0", 00:23:03.713 "path": "/tmp/tmp.6a79bI44kZ" 00:23:03.713 } 00:23:03.713 }, 00:23:03.713 { 00:23:03.713 "method": "keyring_file_add_key", 00:23:03.713 "params": { 00:23:03.713 "name": "key1", 00:23:03.713 "path": "/tmp/tmp.ejnEOWMR4Y" 00:23:03.713 } 00:23:03.713 } 00:23:03.713 ] 00:23:03.713 }, 00:23:03.713 { 00:23:03.713 "subsystem": "iobuf", 00:23:03.713 "config": [ 00:23:03.713 { 00:23:03.713 "method": "iobuf_set_options", 00:23:03.713 "params": { 00:23:03.713 "small_pool_count": 8192, 00:23:03.713 "large_pool_count": 1024, 00:23:03.713 "small_bufsize": 8192, 00:23:03.713 "large_bufsize": 135168, 00:23:03.713 "enable_numa": false 00:23:03.713 } 00:23:03.713 } 00:23:03.713 ] 00:23:03.713 }, 00:23:03.713 { 00:23:03.713 "subsystem": 
"sock", 00:23:03.713 "config": [ 00:23:03.713 { 00:23:03.713 "method": "sock_set_default_impl", 00:23:03.713 "params": { 00:23:03.713 "impl_name": "uring" 00:23:03.714 } 00:23:03.714 }, 00:23:03.714 { 00:23:03.714 "method": "sock_impl_set_options", 00:23:03.714 "params": { 00:23:03.714 "impl_name": "ssl", 00:23:03.714 "recv_buf_size": 4096, 00:23:03.714 "send_buf_size": 4096, 00:23:03.714 "enable_recv_pipe": true, 00:23:03.714 "enable_quickack": false, 00:23:03.714 "enable_placement_id": 0, 00:23:03.714 "enable_zerocopy_send_server": true, 00:23:03.714 "enable_zerocopy_send_client": false, 00:23:03.714 "zerocopy_threshold": 0, 00:23:03.714 "tls_version": 0, 00:23:03.714 "enable_ktls": false 00:23:03.714 } 00:23:03.714 }, 00:23:03.714 { 00:23:03.714 "method": "sock_impl_set_options", 00:23:03.714 "params": { 00:23:03.714 "impl_name": "posix", 00:23:03.714 "recv_buf_size": 2097152, 00:23:03.714 "send_buf_size": 2097152, 00:23:03.714 "enable_recv_pipe": true, 00:23:03.714 "enable_quickack": false, 00:23:03.714 "enable_placement_id": 0, 00:23:03.714 "enable_zerocopy_send_server": true, 00:23:03.714 "enable_zerocopy_send_client": false, 00:23:03.714 "zerocopy_threshold": 0, 00:23:03.714 "tls_version": 0, 00:23:03.714 "enable_ktls": false 00:23:03.714 } 00:23:03.714 }, 00:23:03.714 { 00:23:03.714 "method": "sock_impl_set_options", 00:23:03.714 "params": { 00:23:03.714 "impl_name": "uring", 00:23:03.714 "recv_buf_size": 2097152, 00:23:03.714 "send_buf_size": 2097152, 00:23:03.714 "enable_recv_pipe": true, 00:23:03.714 "enable_quickack": false, 00:23:03.714 "enable_placement_id": 0, 00:23:03.714 "enable_zerocopy_send_server": false, 00:23:03.714 "enable_zerocopy_send_client": false, 00:23:03.714 "zerocopy_threshold": 0, 00:23:03.714 "tls_version": 0, 00:23:03.714 "enable_ktls": false 00:23:03.714 } 00:23:03.714 } 00:23:03.714 ] 00:23:03.714 }, 00:23:03.714 { 00:23:03.714 "subsystem": "vmd", 00:23:03.714 "config": [] 00:23:03.714 }, 00:23:03.714 { 00:23:03.714 "subsystem": "accel", 00:23:03.714 "config": [ 00:23:03.714 { 00:23:03.714 "method": "accel_set_options", 00:23:03.714 "params": { 00:23:03.714 "small_cache_size": 128, 00:23:03.714 "large_cache_size": 16, 00:23:03.714 "task_count": 2048, 00:23:03.714 "sequence_count": 2048, 00:23:03.714 "buf_count": 2048 00:23:03.714 } 00:23:03.714 } 00:23:03.714 ] 00:23:03.714 }, 00:23:03.714 { 00:23:03.714 "subsystem": "bdev", 00:23:03.714 "config": [ 00:23:03.714 { 00:23:03.714 "method": "bdev_set_options", 00:23:03.714 "params": { 00:23:03.714 "bdev_io_pool_size": 65535, 00:23:03.714 "bdev_io_cache_size": 256, 00:23:03.714 "bdev_auto_examine": true, 00:23:03.714 "iobuf_small_cache_size": 128, 00:23:03.714 "iobuf_large_cache_size": 16 00:23:03.714 } 00:23:03.714 }, 00:23:03.714 { 00:23:03.714 "method": "bdev_raid_set_options", 00:23:03.714 "params": { 00:23:03.714 "process_window_size_kb": 1024, 00:23:03.714 "process_max_bandwidth_mb_sec": 0 00:23:03.714 } 00:23:03.714 }, 00:23:03.714 { 00:23:03.714 "method": "bdev_iscsi_set_options", 00:23:03.714 "params": { 00:23:03.714 "timeout_sec": 30 00:23:03.714 } 00:23:03.714 }, 00:23:03.714 { 00:23:03.714 "method": "bdev_nvme_set_options", 00:23:03.714 "params": { 00:23:03.714 "action_on_timeout": "none", 00:23:03.714 "timeout_us": 0, 00:23:03.714 "timeout_admin_us": 0, 00:23:03.714 "keep_alive_timeout_ms": 10000, 00:23:03.714 "arbitration_burst": 0, 00:23:03.714 "low_priority_weight": 0, 00:23:03.714 "medium_priority_weight": 0, 00:23:03.714 "high_priority_weight": 0, 00:23:03.714 "nvme_adminq_poll_period_us": 
10000, 00:23:03.714 "nvme_ioq_poll_period_us": 0, 00:23:03.714 "io_queue_requests": 512, 00:23:03.714 "delay_cmd_submit": true, 00:23:03.714 "transport_retry_count": 4, 00:23:03.714 "bdev_retry_count": 3, 00:23:03.714 "transport_ack_timeout": 0, 00:23:03.714 "ctrlr_loss_timeout_sec": 0, 00:23:03.714 "reconnect_delay_sec": 0, 00:23:03.714 "fast_io_fail_timeout_sec": 0, 00:23:03.714 "disable_auto_failback": false, 00:23:03.714 "generate_uuids": false, 00:23:03.714 "transport_tos": 0, 00:23:03.714 "nvme_error_stat": false, 00:23:03.714 "rdma_srq_size": 0, 00:23:03.714 "io_path_stat": false, 00:23:03.714 "allow_accel_sequence": false, 00:23:03.714 "rdma_max_cq_size": 0, 00:23:03.714 "rdma_cm_event_timeout_ms": 0, 00:23:03.714 "dhchap_digests": [ 00:23:03.714 "sha256", 00:23:03.714 "sha384", 00:23:03.714 "sha512" 00:23:03.714 ], 00:23:03.714 "dhchap_dhgroups": [ 00:23:03.714 "null", 00:23:03.714 "ffdhe2048", 00:23:03.714 "ffdhe3072", 00:23:03.714 "ffdhe4096", 00:23:03.714 "ffdhe6144", 00:23:03.714 "ffdhe8192" 00:23:03.714 ] 00:23:03.714 } 00:23:03.714 }, 00:23:03.714 { 00:23:03.714 "method": "bdev_nvme_attach_controller", 00:23:03.714 "params": { 00:23:03.714 "name": "nvme0", 00:23:03.714 "trtype": "TCP", 00:23:03.714 "adrfam": "IPv4", 00:23:03.714 "traddr": "127.0.0.1", 00:23:03.714 "trsvcid": "4420", 00:23:03.714 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:03.714 "prchk_reftag": false, 00:23:03.714 "prchk_guard": false, 00:23:03.714 "ctrlr_loss_timeout_sec": 0, 00:23:03.714 "reconnect_delay_sec": 0, 00:23:03.714 "fast_io_fail_timeout_sec": 0, 00:23:03.714 "psk": "key0", 00:23:03.714 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:03.714 "hdgst": false, 00:23:03.714 "ddgst": false, 00:23:03.714 "multipath": "multipath" 00:23:03.714 } 00:23:03.714 }, 00:23:03.714 { 00:23:03.714 "method": "bdev_nvme_set_hotplug", 00:23:03.714 "params": { 00:23:03.714 "period_us": 100000, 00:23:03.714 "enable": false 00:23:03.714 } 00:23:03.714 }, 00:23:03.714 { 00:23:03.714 "method": "bdev_wait_for_examine" 00:23:03.714 } 00:23:03.714 ] 00:23:03.714 }, 00:23:03.714 { 00:23:03.714 "subsystem": "nbd", 00:23:03.714 "config": [] 00:23:03.714 } 00:23:03.714 ] 00:23:03.714 }' 00:23:03.714 10:00:28 keyring_file -- keyring/file.sh@115 -- # killprocess 85167 00:23:03.714 10:00:28 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 85167 ']' 00:23:03.714 10:00:28 keyring_file -- common/autotest_common.sh@958 -- # kill -0 85167 00:23:03.714 10:00:28 keyring_file -- common/autotest_common.sh@959 -- # uname 00:23:03.714 10:00:28 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:03.714 10:00:28 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85167 00:23:03.714 killing process with pid 85167 00:23:03.714 Received shutdown signal, test time was about 1.000000 seconds 00:23:03.714 00:23:03.714 Latency(us) 00:23:03.714 [2024-12-06T10:00:28.986Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:03.714 [2024-12-06T10:00:28.986Z] =================================================================================================================== 00:23:03.714 [2024-12-06T10:00:28.986Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:03.714 10:00:28 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:03.714 10:00:28 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:03.714 10:00:28 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85167' 00:23:03.714 
10:00:28 keyring_file -- common/autotest_common.sh@973 -- # kill 85167 00:23:03.714 10:00:28 keyring_file -- common/autotest_common.sh@978 -- # wait 85167 00:23:03.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:03.714 10:00:28 keyring_file -- keyring/file.sh@118 -- # bperfpid=85406 00:23:03.714 10:00:28 keyring_file -- keyring/file.sh@120 -- # waitforlisten 85406 /var/tmp/bperf.sock 00:23:03.714 10:00:28 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 85406 ']' 00:23:03.714 10:00:28 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:03.714 10:00:28 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:03.714 10:00:28 keyring_file -- keyring/file.sh@116 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:23:03.714 10:00:28 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:03.714 10:00:28 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:03.714 10:00:28 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:23:03.714 10:00:28 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:23:03.714 "subsystems": [ 00:23:03.714 { 00:23:03.715 "subsystem": "keyring", 00:23:03.715 "config": [ 00:23:03.715 { 00:23:03.715 "method": "keyring_file_add_key", 00:23:03.715 "params": { 00:23:03.715 "name": "key0", 00:23:03.715 "path": "/tmp/tmp.6a79bI44kZ" 00:23:03.715 } 00:23:03.715 }, 00:23:03.715 { 00:23:03.715 "method": "keyring_file_add_key", 00:23:03.715 "params": { 00:23:03.715 "name": "key1", 00:23:03.715 "path": "/tmp/tmp.ejnEOWMR4Y" 00:23:03.715 } 00:23:03.715 } 00:23:03.715 ] 00:23:03.715 }, 00:23:03.715 { 00:23:03.715 "subsystem": "iobuf", 00:23:03.715 "config": [ 00:23:03.715 { 00:23:03.715 "method": "iobuf_set_options", 00:23:03.715 "params": { 00:23:03.715 "small_pool_count": 8192, 00:23:03.715 "large_pool_count": 1024, 00:23:03.715 "small_bufsize": 8192, 00:23:03.715 "large_bufsize": 135168, 00:23:03.715 "enable_numa": false 00:23:03.715 } 00:23:03.715 } 00:23:03.715 ] 00:23:03.715 }, 00:23:03.715 { 00:23:03.715 "subsystem": "sock", 00:23:03.715 "config": [ 00:23:03.715 { 00:23:03.715 "method": "sock_set_default_impl", 00:23:03.715 "params": { 00:23:03.715 "impl_name": "uring" 00:23:03.715 } 00:23:03.715 }, 00:23:03.715 { 00:23:03.715 "method": "sock_impl_set_options", 00:23:03.715 "params": { 00:23:03.715 "impl_name": "ssl", 00:23:03.715 "recv_buf_size": 4096, 00:23:03.715 "send_buf_size": 4096, 00:23:03.715 "enable_recv_pipe": true, 00:23:03.715 "enable_quickack": false, 00:23:03.715 "enable_placement_id": 0, 00:23:03.715 "enable_zerocopy_send_server": true, 00:23:03.715 "enable_zerocopy_send_client": false, 00:23:03.715 "zerocopy_threshold": 0, 00:23:03.715 "tls_version": 0, 00:23:03.715 "enable_ktls": false 00:23:03.715 } 00:23:03.715 }, 00:23:03.715 { 00:23:03.715 "method": "sock_impl_set_options", 00:23:03.715 "params": { 00:23:03.715 "impl_name": "posix", 00:23:03.715 "recv_buf_size": 2097152, 00:23:03.715 "send_buf_size": 2097152, 00:23:03.715 "enable_recv_pipe": true, 00:23:03.715 "enable_quickack": false, 00:23:03.715 "enable_placement_id": 0, 00:23:03.715 "enable_zerocopy_send_server": true, 00:23:03.715 "enable_zerocopy_send_client": false, 00:23:03.715 "zerocopy_threshold": 0, 00:23:03.715 "tls_version": 0, 00:23:03.715 "enable_ktls": false 
00:23:03.715 } 00:23:03.715 }, 00:23:03.715 { 00:23:03.715 "method": "sock_impl_set_options", 00:23:03.715 "params": { 00:23:03.715 "impl_name": "uring", 00:23:03.715 "recv_buf_size": 2097152, 00:23:03.715 "send_buf_size": 2097152, 00:23:03.715 "enable_recv_pipe": true, 00:23:03.715 "enable_quickack": false, 00:23:03.715 "enable_placement_id": 0, 00:23:03.715 "enable_zerocopy_send_server": false, 00:23:03.715 "enable_zerocopy_send_client": false, 00:23:03.715 "zerocopy_threshold": 0, 00:23:03.715 "tls_version": 0, 00:23:03.715 "enable_ktls": false 00:23:03.715 } 00:23:03.715 } 00:23:03.715 ] 00:23:03.715 }, 00:23:03.715 { 00:23:03.715 "subsystem": "vmd", 00:23:03.715 "config": [] 00:23:03.715 }, 00:23:03.715 { 00:23:03.715 "subsystem": "accel", 00:23:03.715 "config": [ 00:23:03.715 { 00:23:03.715 "method": "accel_set_options", 00:23:03.715 "params": { 00:23:03.715 "small_cache_size": 128, 00:23:03.715 "large_cache_size": 16, 00:23:03.715 "task_count": 2048, 00:23:03.715 "sequence_count": 2048, 00:23:03.715 "buf_count": 2048 00:23:03.715 } 00:23:03.715 } 00:23:03.715 ] 00:23:03.715 }, 00:23:03.715 { 00:23:03.715 "subsystem": "bdev", 00:23:03.715 "config": [ 00:23:03.715 { 00:23:03.715 "method": "bdev_set_options", 00:23:03.715 "params": { 00:23:03.715 "bdev_io_pool_size": 65535, 00:23:03.715 "bdev_io_cache_size": 256, 00:23:03.715 "bdev_auto_examine": true, 00:23:03.715 "iobuf_small_cache_size": 128, 00:23:03.715 "iobuf_large_cache_size": 16 00:23:03.715 } 00:23:03.715 }, 00:23:03.715 { 00:23:03.715 "method": "bdev_raid_set_options", 00:23:03.715 "params": { 00:23:03.715 "process_window_size_kb": 1024, 00:23:03.715 "process_max_bandwidth_mb_sec": 0 00:23:03.715 } 00:23:03.715 }, 00:23:03.715 { 00:23:03.715 "method": "bdev_iscsi_set_options", 00:23:03.715 "params": { 00:23:03.715 "timeout_sec": 30 00:23:03.715 } 00:23:03.715 }, 00:23:03.715 { 00:23:03.715 "method": "bdev_nvme_set_options", 00:23:03.715 "params": { 00:23:03.715 "action_on_timeout": "none", 00:23:03.715 "timeout_us": 0, 00:23:03.715 "timeout_admin_us": 0, 00:23:03.715 "keep_alive_timeout_ms": 10000, 00:23:03.715 "arbitration_burst": 0, 00:23:03.715 "low_priority_weight": 0, 00:23:03.715 "medium_priority_weight": 0, 00:23:03.715 "high_priority_weight": 0, 00:23:03.715 "nvme_adminq_poll_period_us": 10000, 00:23:03.715 "nvme_ioq_poll_period_us": 0, 00:23:03.715 "io_queue_requests": 512, 00:23:03.715 "delay_cmd_submit": true, 00:23:03.715 "transport_retry_count": 4, 00:23:03.715 "bdev_retry_count": 3, 00:23:03.715 "transport_ack_timeout": 0, 00:23:03.715 "ctrlr_loss_timeout_sec": 0, 00:23:03.715 "reconnect_delay_sec": 0, 00:23:03.715 "fast_io_fail_timeout_sec": 0, 00:23:03.715 "disable_auto_failback": false, 00:23:03.715 "generate_uuids": false, 00:23:03.715 "transport_tos": 0, 00:23:03.715 "nvme_error_stat": false, 00:23:03.715 "rdma_srq_size": 0, 00:23:03.715 "io_path_stat": false, 00:23:03.715 "allow_accel_sequence": false, 00:23:03.715 "rdma_max_cq_size": 0, 00:23:03.715 "rdma_cm_event_timeout_ms": 0, 00:23:03.715 "dhchap_digests": [ 00:23:03.715 "sha256", 00:23:03.715 "sha384", 00:23:03.715 "sha512" 00:23:03.715 ], 00:23:03.715 "dhchap_dhgroups": [ 00:23:03.715 "null", 00:23:03.715 "ffdhe2048", 00:23:03.715 "ffdhe3072", 00:23:03.715 "ffdhe4096", 00:23:03.715 "ffdhe6144", 00:23:03.715 "ffdhe8192" 00:23:03.715 ] 00:23:03.715 } 00:23:03.715 }, 00:23:03.715 { 00:23:03.715 "method": "bdev_nvme_attach_controller", 00:23:03.715 "params": { 00:23:03.715 "name": "nvme0", 00:23:03.715 "trtype": "TCP", 00:23:03.715 "adrfam": "IPv4", 
00:23:03.715 "traddr": "127.0.0.1", 00:23:03.715 "trsvcid": "4420", 00:23:03.715 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:03.715 "prchk_reftag": false, 00:23:03.715 "prchk_guard": false, 00:23:03.715 "ctrlr_loss_timeout_sec": 0, 00:23:03.715 "reconnect_delay_sec": 0, 00:23:03.715 "fast_io_fail_timeout_sec": 0, 00:23:03.715 "psk": "key0", 00:23:03.715 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:03.715 "hdgst": false, 00:23:03.715 "ddgst": false, 00:23:03.715 "multipath": "multipath" 00:23:03.715 } 00:23:03.715 }, 00:23:03.715 { 00:23:03.715 "method": "bdev_nvme_set_hotplug", 00:23:03.715 "params": { 00:23:03.715 "period_us": 100000, 00:23:03.715 "enable": false 00:23:03.715 } 00:23:03.715 }, 00:23:03.715 { 00:23:03.715 "method": "bdev_wait_for_examine" 00:23:03.715 } 00:23:03.715 ] 00:23:03.715 }, 00:23:03.715 { 00:23:03.715 "subsystem": "nbd", 00:23:03.715 "config": [] 00:23:03.715 } 00:23:03.715 ] 00:23:03.715 }' 00:23:03.715 [2024-12-06 10:00:28.968153] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:23:03.715 [2024-12-06 10:00:28.968243] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85406 ] 00:23:03.975 [2024-12-06 10:00:29.109436] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:03.975 [2024-12-06 10:00:29.152712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:04.235 [2024-12-06 10:00:29.286366] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:04.235 [2024-12-06 10:00:29.344429] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:04.804 10:00:29 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:04.804 10:00:29 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:23:04.804 10:00:29 keyring_file -- keyring/file.sh@121 -- # jq length 00:23:04.804 10:00:29 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:23:04.804 10:00:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:05.063 10:00:30 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:23:05.063 10:00:30 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:23:05.063 10:00:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:05.063 10:00:30 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:23:05.063 10:00:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:05.063 10:00:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:05.063 10:00:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:05.323 10:00:30 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:23:05.323 10:00:30 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:23:05.323 10:00:30 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:23:05.323 10:00:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:05.323 10:00:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:05.323 10:00:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:05.323 10:00:30 
keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:23:05.582 10:00:30 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:23:05.582 10:00:30 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:23:05.582 10:00:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:23:05.582 10:00:30 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:23:05.842 10:00:31 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:23:05.842 10:00:31 keyring_file -- keyring/file.sh@1 -- # cleanup 00:23:05.842 10:00:31 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.6a79bI44kZ /tmp/tmp.ejnEOWMR4Y 00:23:05.842 10:00:31 keyring_file -- keyring/file.sh@20 -- # killprocess 85406 00:23:05.842 10:00:31 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 85406 ']' 00:23:05.842 10:00:31 keyring_file -- common/autotest_common.sh@958 -- # kill -0 85406 00:23:05.842 10:00:31 keyring_file -- common/autotest_common.sh@959 -- # uname 00:23:05.842 10:00:31 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:05.842 10:00:31 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85406 00:23:05.842 killing process with pid 85406 00:23:05.842 Received shutdown signal, test time was about 1.000000 seconds 00:23:05.842 00:23:05.842 Latency(us) 00:23:05.842 [2024-12-06T10:00:31.114Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:05.842 [2024-12-06T10:00:31.114Z] =================================================================================================================== 00:23:05.842 [2024-12-06T10:00:31.114Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:05.842 10:00:31 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:05.842 10:00:31 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:05.842 10:00:31 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85406' 00:23:05.842 10:00:31 keyring_file -- common/autotest_common.sh@973 -- # kill 85406 00:23:05.842 10:00:31 keyring_file -- common/autotest_common.sh@978 -- # wait 85406 00:23:06.102 10:00:31 keyring_file -- keyring/file.sh@21 -- # killprocess 85157 00:23:06.102 10:00:31 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 85157 ']' 00:23:06.102 10:00:31 keyring_file -- common/autotest_common.sh@958 -- # kill -0 85157 00:23:06.102 10:00:31 keyring_file -- common/autotest_common.sh@959 -- # uname 00:23:06.102 10:00:31 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:06.102 10:00:31 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85157 00:23:06.102 killing process with pid 85157 00:23:06.102 10:00:31 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:06.102 10:00:31 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:06.102 10:00:31 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85157' 00:23:06.102 10:00:31 keyring_file -- common/autotest_common.sh@973 -- # kill 85157 00:23:06.102 10:00:31 keyring_file -- common/autotest_common.sh@978 -- # wait 85157 00:23:06.361 00:23:06.361 real 0m14.781s 00:23:06.361 user 0m37.088s 00:23:06.361 sys 0m3.126s 00:23:06.361 10:00:31 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:06.361 
************************************ 00:23:06.361 END TEST keyring_file 00:23:06.361 ************************************ 00:23:06.361 10:00:31 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:23:06.621 10:00:31 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:23:06.621 10:00:31 -- spdk/autotest.sh@294 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:23:06.621 10:00:31 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:06.621 10:00:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:06.621 10:00:31 -- common/autotest_common.sh@10 -- # set +x 00:23:06.621 ************************************ 00:23:06.621 START TEST keyring_linux 00:23:06.621 ************************************ 00:23:06.621 10:00:31 keyring_linux -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:23:06.621 Joined session keyring: 824393440 00:23:06.621 * Looking for test storage... 00:23:06.621 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:23:06.621 10:00:31 keyring_linux -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:06.621 10:00:31 keyring_linux -- common/autotest_common.sh@1711 -- # lcov --version 00:23:06.621 10:00:31 keyring_linux -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:06.621 10:00:31 keyring_linux -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:06.621 10:00:31 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:06.621 10:00:31 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:06.621 10:00:31 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:06.621 10:00:31 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:23:06.621 10:00:31 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:23:06.621 10:00:31 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:23:06.621 10:00:31 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:23:06.621 10:00:31 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:23:06.621 10:00:31 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:23:06.621 10:00:31 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:23:06.621 10:00:31 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:06.621 10:00:31 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:23:06.621 10:00:31 keyring_linux -- scripts/common.sh@345 -- # : 1 00:23:06.621 10:00:31 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:06.621 10:00:31 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:06.621 10:00:31 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:23:06.621 10:00:31 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:23:06.621 10:00:31 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:06.621 10:00:31 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:23:06.621 10:00:31 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:23:06.621 10:00:31 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:23:06.621 10:00:31 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:23:06.621 10:00:31 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:06.621 10:00:31 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:23:06.621 10:00:31 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:23:06.621 10:00:31 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:06.621 10:00:31 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:06.621 10:00:31 keyring_linux -- scripts/common.sh@368 -- # return 0 00:23:06.621 10:00:31 keyring_linux -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:06.621 10:00:31 keyring_linux -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:06.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:06.621 --rc genhtml_branch_coverage=1 00:23:06.621 --rc genhtml_function_coverage=1 00:23:06.621 --rc genhtml_legend=1 00:23:06.621 --rc geninfo_all_blocks=1 00:23:06.621 --rc geninfo_unexecuted_blocks=1 00:23:06.621 00:23:06.621 ' 00:23:06.621 10:00:31 keyring_linux -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:06.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:06.621 --rc genhtml_branch_coverage=1 00:23:06.621 --rc genhtml_function_coverage=1 00:23:06.621 --rc genhtml_legend=1 00:23:06.621 --rc geninfo_all_blocks=1 00:23:06.621 --rc geninfo_unexecuted_blocks=1 00:23:06.621 00:23:06.621 ' 00:23:06.621 10:00:31 keyring_linux -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:06.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:06.621 --rc genhtml_branch_coverage=1 00:23:06.621 --rc genhtml_function_coverage=1 00:23:06.621 --rc genhtml_legend=1 00:23:06.621 --rc geninfo_all_blocks=1 00:23:06.621 --rc geninfo_unexecuted_blocks=1 00:23:06.621 00:23:06.621 ' 00:23:06.621 10:00:31 keyring_linux -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:06.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:06.621 --rc genhtml_branch_coverage=1 00:23:06.621 --rc genhtml_function_coverage=1 00:23:06.621 --rc genhtml_legend=1 00:23:06.621 --rc geninfo_all_blocks=1 00:23:06.621 --rc geninfo_unexecuted_blocks=1 00:23:06.621 00:23:06.621 ' 00:23:06.621 10:00:31 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:23:06.621 10:00:31 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:06.621 10:00:31 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:23:06.622 10:00:31 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:06.622 10:00:31 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:06.622 10:00:31 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:06.622 10:00:31 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:06.622 10:00:31 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:06.622 10:00:31 
keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:06.622 10:00:31 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:06.622 10:00:31 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:06.622 10:00:31 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:06.622 10:00:31 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:06.622 10:00:31 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:23:06.622 10:00:31 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=8a753b29-bc84-4c8c-8ae2-d2e41bd915e7 00:23:06.622 10:00:31 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:06.622 10:00:31 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:06.622 10:00:31 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:06.622 10:00:31 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:06.622 10:00:31 keyring_linux -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:06.622 10:00:31 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:23:06.622 10:00:31 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:06.622 10:00:31 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:06.622 10:00:31 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:06.622 10:00:31 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.622 10:00:31 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.622 10:00:31 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.622 10:00:31 keyring_linux -- paths/export.sh@5 -- # export PATH 00:23:06.622 10:00:31 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.622 10:00:31 keyring_linux -- nvmf/common.sh@51 -- # : 0 
00:23:06.622 10:00:31 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:06.622 10:00:31 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:06.622 10:00:31 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:06.622 10:00:31 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:06.622 10:00:31 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:06.622 10:00:31 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:06.622 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:06.622 10:00:31 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:06.622 10:00:31 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:06.622 10:00:31 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:06.622 10:00:31 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:23:06.622 10:00:31 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:23:06.622 10:00:31 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:23:06.622 10:00:31 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:23:06.622 10:00:31 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:23:06.622 10:00:31 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:23:06.622 10:00:31 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:23:06.622 10:00:31 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:23:06.622 10:00:31 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:23:06.622 10:00:31 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:23:06.622 10:00:31 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:23:06.622 10:00:31 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:23:06.622 10:00:31 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:23:06.622 10:00:31 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:23:06.622 10:00:31 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:23:06.622 10:00:31 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:23:06.622 10:00:31 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:23:06.622 10:00:31 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:23:06.622 10:00:31 keyring_linux -- nvmf/common.sh@733 -- # python - 00:23:06.881 10:00:31 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:23:06.881 /tmp/:spdk-test:key0 00:23:06.881 10:00:31 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:23:06.881 10:00:31 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:23:06.881 10:00:31 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:23:06.881 10:00:31 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:23:06.881 10:00:31 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:23:06.881 10:00:31 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:23:06.881 10:00:31 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:23:06.881 10:00:31 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 
112233445566778899aabbccddeeff00 0 00:23:06.881 10:00:31 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:23:06.881 10:00:31 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:23:06.881 10:00:31 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:23:06.881 10:00:31 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:23:06.881 10:00:31 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:23:06.881 10:00:31 keyring_linux -- nvmf/common.sh@733 -- # python - 00:23:06.881 10:00:31 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:23:06.881 /tmp/:spdk-test:key1 00:23:06.881 10:00:31 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:23:06.881 10:00:31 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=85533 00:23:06.881 10:00:31 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:06.881 10:00:31 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 85533 00:23:06.881 10:00:31 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 85533 ']' 00:23:06.881 10:00:31 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:06.881 10:00:31 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:06.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:06.881 10:00:31 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:06.881 10:00:31 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:06.881 10:00:31 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:23:06.881 [2024-12-06 10:00:32.047626] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:23:06.881 [2024-12-06 10:00:32.047722] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85533 ] 00:23:07.140 [2024-12-06 10:00:32.192187] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:07.140 [2024-12-06 10:00:32.234515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:07.140 [2024-12-06 10:00:32.300363] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:07.400 10:00:32 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:07.400 10:00:32 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:23:07.400 10:00:32 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:23:07.400 10:00:32 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.400 10:00:32 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:23:07.400 [2024-12-06 10:00:32.484492] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:07.400 null0 00:23:07.400 [2024-12-06 10:00:32.516476] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:07.400 [2024-12-06 10:00:32.516685] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:23:07.400 10:00:32 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.400 10:00:32 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:23:07.400 911182818 00:23:07.400 10:00:32 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:23:07.400 638495870 00:23:07.400 10:00:32 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=85545 00:23:07.400 10:00:32 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:23:07.400 10:00:32 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 85545 /var/tmp/bperf.sock 00:23:07.401 10:00:32 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 85545 ']' 00:23:07.401 10:00:32 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:07.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:07.401 10:00:32 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:07.401 10:00:32 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:07.401 10:00:32 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:07.401 10:00:32 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:23:07.401 [2024-12-06 10:00:32.600386] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
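The two keyctl add user ... @s calls above are the heart of what keyring_linux exercises: the formatted PSKs are placed in the kernel session keyring under the names :spdk-test:key0 and :spdk-test:key1, and keyctl returns the serial numbers (911182818 and 638495870) that the later checks search for and eventually unlink. A minimal sketch of that keyctl flow with a throwaway key name; the name and payload here are illustrative and not part of the test:

    # Add a user-type key to the session keyring (@s); keyctl prints its serial.
    sn=$(keyctl add user :example:key0 "NVMeTLSkey-1:00:examplepayload:" @s)

    # Later, look the key up by name and read its payload back.
    keyctl search @s user :example:key0     # prints the same serial number
    keyctl print "$sn"                      # prints the stored payload

    # Remove it from the session keyring when finished.
    keyctl unlink "$sn" @s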
00:23:07.401 [2024-12-06 10:00:32.600490] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85545 ] 00:23:07.660 [2024-12-06 10:00:32.749546] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:07.660 [2024-12-06 10:00:32.801053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:07.660 10:00:32 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:07.660 10:00:32 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:23:07.660 10:00:32 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:23:07.660 10:00:32 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:23:07.919 10:00:33 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:23:07.919 10:00:33 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:23:08.179 [2024-12-06 10:00:33.308553] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:08.179 10:00:33 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:23:08.179 10:00:33 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:23:08.439 [2024-12-06 10:00:33.554185] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:08.439 nvme0n1 00:23:08.439 10:00:33 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:23:08.439 10:00:33 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:23:08.439 10:00:33 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:23:08.439 10:00:33 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:23:08.439 10:00:33 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:23:08.439 10:00:33 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:08.699 10:00:33 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:23:08.699 10:00:33 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:23:08.699 10:00:33 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:23:08.699 10:00:33 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:23:08.699 10:00:33 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:08.699 10:00:33 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:23:08.699 10:00:33 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:08.959 10:00:34 keyring_linux -- keyring/linux.sh@25 -- # sn=911182818 00:23:08.959 10:00:34 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:23:08.959 10:00:34 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 
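Condensed, the bdevperf side of the setup above is a handful of RPC calls against the bperf socket: enable the Linux keyring plugin, finish framework init, then attach an NVMe/TCP controller whose TLS PSK is referenced by keyring name rather than by a key file. The calls below restate what the trace already shows, with the long repository paths shortened to rpc.py:

    rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable
    rpc.py -s /var/tmp/bperf.sock framework_start_init
    rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 \
        -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 \
        --psk :spdk-test:key0

    # Sanity check: exactly one key should now be visible to the application.
    rpc.py -s /var/tmp/bperf.sock keyring_get_keys | jq length   # expect 1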
00:23:08.959 10:00:34 keyring_linux -- keyring/linux.sh@26 -- # [[ 911182818 == \9\1\1\1\8\2\8\1\8 ]] 00:23:08.959 10:00:34 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 911182818 00:23:08.959 10:00:34 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:23:08.959 10:00:34 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:09.218 Running I/O for 1 seconds... 00:23:10.154 13820.00 IOPS, 53.98 MiB/s 00:23:10.154 Latency(us) 00:23:10.154 [2024-12-06T10:00:35.426Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:10.154 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:23:10.154 nvme0n1 : 1.01 13821.06 53.99 0.00 0.00 9213.69 4557.73 13107.20 00:23:10.154 [2024-12-06T10:00:35.426Z] =================================================================================================================== 00:23:10.154 [2024-12-06T10:00:35.426Z] Total : 13821.06 53.99 0.00 0.00 9213.69 4557.73 13107.20 00:23:10.154 { 00:23:10.154 "results": [ 00:23:10.154 { 00:23:10.154 "job": "nvme0n1", 00:23:10.154 "core_mask": "0x2", 00:23:10.154 "workload": "randread", 00:23:10.154 "status": "finished", 00:23:10.154 "queue_depth": 128, 00:23:10.154 "io_size": 4096, 00:23:10.154 "runtime": 1.009257, 00:23:10.154 "iops": 13821.058461818942, 00:23:10.154 "mibps": 53.98850961648024, 00:23:10.154 "io_failed": 0, 00:23:10.154 "io_timeout": 0, 00:23:10.154 "avg_latency_us": 9213.692781626574, 00:23:10.155 "min_latency_us": 4557.730909090909, 00:23:10.155 "max_latency_us": 13107.2 00:23:10.155 } 00:23:10.155 ], 00:23:10.155 "core_count": 1 00:23:10.155 } 00:23:10.155 10:00:35 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:23:10.155 10:00:35 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:23:10.412 10:00:35 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:23:10.412 10:00:35 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:23:10.412 10:00:35 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:23:10.412 10:00:35 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:23:10.412 10:00:35 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:10.412 10:00:35 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:23:10.668 10:00:35 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:23:10.668 10:00:35 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:23:10.668 10:00:35 keyring_linux -- keyring/linux.sh@23 -- # return 00:23:10.668 10:00:35 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:23:10.668 10:00:35 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:23:10.668 10:00:35 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:23:10.668 10:00:35 
keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:23:10.668 10:00:35 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:10.668 10:00:35 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:23:10.668 10:00:35 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:10.668 10:00:35 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:23:10.668 10:00:35 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:23:10.927 [2024-12-06 10:00:36.089866] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:10.927 [2024-12-06 10:00:36.090487] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11c55d0 (107): Transport endpoint is not connected 00:23:10.927 [2024-12-06 10:00:36.091473] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11c55d0 (9): Bad file descriptor 00:23:10.927 [2024-12-06 10:00:36.092470] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:23:10.927 [2024-12-06 10:00:36.092497] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:23:10.927 [2024-12-06 10:00:36.092508] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:23:10.927 [2024-12-06 10:00:36.092520] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
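These errors are the expected half of the test: :spdk-test:key1 was never registered with the target, so the TLS attach must fail, and the JSON-RPC request/response dump that follows confirms the controller was not created. The NOT/es plumbing from autotest_common.sh boils down to asserting a non-zero exit status, roughly as in this sketch (not the helper itself):

    if rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 \
        -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 \
        --psk :spdk-test:key1; then
        echo "attach with an unregistered PSK unexpectedly succeeded" >&2
        exit 1
    fi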
00:23:10.927 request: 00:23:10.927 { 00:23:10.927 "name": "nvme0", 00:23:10.927 "trtype": "tcp", 00:23:10.927 "traddr": "127.0.0.1", 00:23:10.927 "adrfam": "ipv4", 00:23:10.927 "trsvcid": "4420", 00:23:10.927 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:10.927 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:10.927 "prchk_reftag": false, 00:23:10.927 "prchk_guard": false, 00:23:10.927 "hdgst": false, 00:23:10.927 "ddgst": false, 00:23:10.927 "psk": ":spdk-test:key1", 00:23:10.927 "allow_unrecognized_csi": false, 00:23:10.927 "method": "bdev_nvme_attach_controller", 00:23:10.927 "req_id": 1 00:23:10.927 } 00:23:10.927 Got JSON-RPC error response 00:23:10.927 response: 00:23:10.927 { 00:23:10.927 "code": -5, 00:23:10.927 "message": "Input/output error" 00:23:10.927 } 00:23:10.927 10:00:36 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:23:10.927 10:00:36 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:10.927 10:00:36 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:10.927 10:00:36 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:10.927 10:00:36 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:23:10.927 10:00:36 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:23:10.927 10:00:36 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:23:10.927 10:00:36 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:23:10.927 10:00:36 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:23:10.927 10:00:36 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:23:10.927 10:00:36 keyring_linux -- keyring/linux.sh@33 -- # sn=911182818 00:23:10.927 10:00:36 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 911182818 00:23:10.927 1 links removed 00:23:10.927 10:00:36 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:23:10.927 10:00:36 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:23:10.927 10:00:36 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:23:10.927 10:00:36 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:23:10.927 10:00:36 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:23:10.927 10:00:36 keyring_linux -- keyring/linux.sh@33 -- # sn=638495870 00:23:10.927 10:00:36 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 638495870 00:23:10.927 1 links removed 00:23:10.927 10:00:36 keyring_linux -- keyring/linux.sh@41 -- # killprocess 85545 00:23:10.927 10:00:36 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 85545 ']' 00:23:10.927 10:00:36 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 85545 00:23:10.927 10:00:36 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:23:10.927 10:00:36 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:10.927 10:00:36 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85545 00:23:10.927 10:00:36 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:10.927 killing process with pid 85545 00:23:10.927 10:00:36 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:10.927 10:00:36 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85545' 00:23:10.927 Received shutdown signal, test time was about 1.000000 seconds 00:23:10.927 00:23:10.927 Latency(us) 00:23:10.927 [2024-12-06T10:00:36.199Z] Device Information : runtime(s) IOPS MiB/s Fail/s 
TO/s Average min max 00:23:10.927 [2024-12-06T10:00:36.199Z] =================================================================================================================== 00:23:10.927 [2024-12-06T10:00:36.199Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:10.927 10:00:36 keyring_linux -- common/autotest_common.sh@973 -- # kill 85545 00:23:10.927 10:00:36 keyring_linux -- common/autotest_common.sh@978 -- # wait 85545 00:23:11.185 10:00:36 keyring_linux -- keyring/linux.sh@42 -- # killprocess 85533 00:23:11.185 10:00:36 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 85533 ']' 00:23:11.185 10:00:36 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 85533 00:23:11.185 10:00:36 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:23:11.185 10:00:36 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:11.185 10:00:36 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85533 00:23:11.185 10:00:36 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:11.185 10:00:36 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:11.185 10:00:36 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85533' 00:23:11.185 killing process with pid 85533 00:23:11.185 10:00:36 keyring_linux -- common/autotest_common.sh@973 -- # kill 85533 00:23:11.185 10:00:36 keyring_linux -- common/autotest_common.sh@978 -- # wait 85533 00:23:11.751 00:23:11.751 real 0m5.140s 00:23:11.751 user 0m9.907s 00:23:11.751 sys 0m1.525s 00:23:11.751 10:00:36 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:11.751 ************************************ 00:23:11.751 END TEST keyring_linux 00:23:11.751 ************************************ 00:23:11.751 10:00:36 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:23:11.751 10:00:36 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:23:11.751 10:00:36 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:23:11.751 10:00:36 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:23:11.751 10:00:36 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:23:11.751 10:00:36 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:23:11.751 10:00:36 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:23:11.751 10:00:36 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:23:11.751 10:00:36 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:23:11.751 10:00:36 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:23:11.751 10:00:36 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:23:11.751 10:00:36 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:23:11.751 10:00:36 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:23:11.751 10:00:36 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:23:11.751 10:00:36 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:23:11.751 10:00:36 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:23:11.751 10:00:36 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:23:11.751 10:00:36 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:23:11.751 10:00:36 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:11.751 10:00:36 -- common/autotest_common.sh@10 -- # set +x 00:23:11.751 10:00:36 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:23:11.751 10:00:36 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:23:11.751 10:00:36 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:23:11.751 10:00:36 -- common/autotest_common.sh@10 -- # set +x 00:23:13.648 INFO: APP EXITING 00:23:13.648 INFO: killing all VMs 
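The keyring_linux cleanup traced above is symmetric with the setup: each test key is looked up by name in the session keyring and unlinked, then bdevperf and the target process are killed and reaped. In outline (variable names as used by linux.sh; the killprocess details are simplified), before the unrelated VM and vhost teardown that follows:

    # Drop the test keys from the session keyring.
    for name in :spdk-test:key0 :spdk-test:key1; do
        sn=$(keyctl search @s user "$name") && keyctl unlink "$sn" @s
    done

    # Stop bdevperf and the target, waiting for each to exit.
    kill "$bperfpid"; wait "$bperfpid"
    kill "$tgtpid";   wait "$tgtpid"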
00:23:13.648 INFO: killing vhost app 00:23:13.648 INFO: EXIT DONE 00:23:14.213 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:14.213 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:23:14.213 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:23:14.782 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:14.782 Cleaning 00:23:14.782 Removing: /var/run/dpdk/spdk0/config 00:23:14.782 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:23:14.782 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:23:14.782 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:23:14.782 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:23:14.782 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:23:14.782 Removing: /var/run/dpdk/spdk0/hugepage_info 00:23:14.782 Removing: /var/run/dpdk/spdk1/config 00:23:14.782 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:23:14.782 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:23:14.782 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:23:14.782 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:23:14.782 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:23:14.782 Removing: /var/run/dpdk/spdk1/hugepage_info 00:23:14.782 Removing: /var/run/dpdk/spdk2/config 00:23:14.782 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:23:14.782 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:23:14.782 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:23:14.782 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:23:15.041 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:23:15.041 Removing: /var/run/dpdk/spdk2/hugepage_info 00:23:15.041 Removing: /var/run/dpdk/spdk3/config 00:23:15.041 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:23:15.041 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:23:15.041 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:23:15.041 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:23:15.041 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:23:15.041 Removing: /var/run/dpdk/spdk3/hugepage_info 00:23:15.041 Removing: /var/run/dpdk/spdk4/config 00:23:15.041 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:23:15.041 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:23:15.041 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:23:15.041 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:23:15.041 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:23:15.041 Removing: /var/run/dpdk/spdk4/hugepage_info 00:23:15.041 Removing: /dev/shm/nvmf_trace.0 00:23:15.041 Removing: /dev/shm/spdk_tgt_trace.pid56672 00:23:15.041 Removing: /var/run/dpdk/spdk0 00:23:15.041 Removing: /var/run/dpdk/spdk1 00:23:15.041 Removing: /var/run/dpdk/spdk2 00:23:15.041 Removing: /var/run/dpdk/spdk3 00:23:15.041 Removing: /var/run/dpdk/spdk4 00:23:15.041 Removing: /var/run/dpdk/spdk_pid56519 00:23:15.041 Removing: /var/run/dpdk/spdk_pid56672 00:23:15.041 Removing: /var/run/dpdk/spdk_pid56876 00:23:15.041 Removing: /var/run/dpdk/spdk_pid56957 00:23:15.041 Removing: /var/run/dpdk/spdk_pid56990 00:23:15.041 Removing: /var/run/dpdk/spdk_pid57094 00:23:15.041 Removing: /var/run/dpdk/spdk_pid57112 00:23:15.041 Removing: /var/run/dpdk/spdk_pid57252 00:23:15.041 Removing: /var/run/dpdk/spdk_pid57448 00:23:15.041 Removing: /var/run/dpdk/spdk_pid57602 00:23:15.041 Removing: /var/run/dpdk/spdk_pid57675 00:23:15.041 
Removing: /var/run/dpdk/spdk_pid57751 00:23:15.041 Removing: /var/run/dpdk/spdk_pid57843 00:23:15.041 Removing: /var/run/dpdk/spdk_pid57920 00:23:15.041 Removing: /var/run/dpdk/spdk_pid57959 00:23:15.041 Removing: /var/run/dpdk/spdk_pid57989 00:23:15.041 Removing: /var/run/dpdk/spdk_pid58058 00:23:15.041 Removing: /var/run/dpdk/spdk_pid58145 00:23:15.041 Removing: /var/run/dpdk/spdk_pid58589 00:23:15.041 Removing: /var/run/dpdk/spdk_pid58628 00:23:15.041 Removing: /var/run/dpdk/spdk_pid58677 00:23:15.041 Removing: /var/run/dpdk/spdk_pid58680 00:23:15.041 Removing: /var/run/dpdk/spdk_pid58753 00:23:15.041 Removing: /var/run/dpdk/spdk_pid58761 00:23:15.041 Removing: /var/run/dpdk/spdk_pid58828 00:23:15.041 Removing: /var/run/dpdk/spdk_pid58844 00:23:15.041 Removing: /var/run/dpdk/spdk_pid58890 00:23:15.041 Removing: /var/run/dpdk/spdk_pid58900 00:23:15.041 Removing: /var/run/dpdk/spdk_pid58946 00:23:15.041 Removing: /var/run/dpdk/spdk_pid58964 00:23:15.041 Removing: /var/run/dpdk/spdk_pid59100 00:23:15.041 Removing: /var/run/dpdk/spdk_pid59135 00:23:15.041 Removing: /var/run/dpdk/spdk_pid59218 00:23:15.041 Removing: /var/run/dpdk/spdk_pid59552 00:23:15.042 Removing: /var/run/dpdk/spdk_pid59568 00:23:15.042 Removing: /var/run/dpdk/spdk_pid59600 00:23:15.042 Removing: /var/run/dpdk/spdk_pid59614 00:23:15.042 Removing: /var/run/dpdk/spdk_pid59635 00:23:15.042 Removing: /var/run/dpdk/spdk_pid59654 00:23:15.042 Removing: /var/run/dpdk/spdk_pid59667 00:23:15.042 Removing: /var/run/dpdk/spdk_pid59683 00:23:15.042 Removing: /var/run/dpdk/spdk_pid59702 00:23:15.042 Removing: /var/run/dpdk/spdk_pid59715 00:23:15.042 Removing: /var/run/dpdk/spdk_pid59731 00:23:15.042 Removing: /var/run/dpdk/spdk_pid59750 00:23:15.042 Removing: /var/run/dpdk/spdk_pid59769 00:23:15.042 Removing: /var/run/dpdk/spdk_pid59784 00:23:15.042 Removing: /var/run/dpdk/spdk_pid59803 00:23:15.042 Removing: /var/run/dpdk/spdk_pid59817 00:23:15.042 Removing: /var/run/dpdk/spdk_pid59832 00:23:15.042 Removing: /var/run/dpdk/spdk_pid59857 00:23:15.042 Removing: /var/run/dpdk/spdk_pid59865 00:23:15.042 Removing: /var/run/dpdk/spdk_pid59886 00:23:15.042 Removing: /var/run/dpdk/spdk_pid59915 00:23:15.042 Removing: /var/run/dpdk/spdk_pid59930 00:23:15.042 Removing: /var/run/dpdk/spdk_pid59965 00:23:15.042 Removing: /var/run/dpdk/spdk_pid60031 00:23:15.042 Removing: /var/run/dpdk/spdk_pid60060 00:23:15.042 Removing: /var/run/dpdk/spdk_pid60069 00:23:15.042 Removing: /var/run/dpdk/spdk_pid60098 00:23:15.042 Removing: /var/run/dpdk/spdk_pid60113 00:23:15.042 Removing: /var/run/dpdk/spdk_pid60115 00:23:15.300 Removing: /var/run/dpdk/spdk_pid60163 00:23:15.300 Removing: /var/run/dpdk/spdk_pid60171 00:23:15.300 Removing: /var/run/dpdk/spdk_pid60205 00:23:15.300 Removing: /var/run/dpdk/spdk_pid60209 00:23:15.300 Removing: /var/run/dpdk/spdk_pid60224 00:23:15.300 Removing: /var/run/dpdk/spdk_pid60230 00:23:15.300 Removing: /var/run/dpdk/spdk_pid60245 00:23:15.300 Removing: /var/run/dpdk/spdk_pid60249 00:23:15.300 Removing: /var/run/dpdk/spdk_pid60264 00:23:15.300 Removing: /var/run/dpdk/spdk_pid60268 00:23:15.300 Removing: /var/run/dpdk/spdk_pid60302 00:23:15.300 Removing: /var/run/dpdk/spdk_pid60329 00:23:15.300 Removing: /var/run/dpdk/spdk_pid60338 00:23:15.300 Removing: /var/run/dpdk/spdk_pid60371 00:23:15.300 Removing: /var/run/dpdk/spdk_pid60376 00:23:15.300 Removing: /var/run/dpdk/spdk_pid60384 00:23:15.300 Removing: /var/run/dpdk/spdk_pid60424 00:23:15.300 Removing: /var/run/dpdk/spdk_pid60441 00:23:15.300 Removing: 
/var/run/dpdk/spdk_pid60462 00:23:15.300 Removing: /var/run/dpdk/spdk_pid60476 00:23:15.300 Removing: /var/run/dpdk/spdk_pid60479 00:23:15.300 Removing: /var/run/dpdk/spdk_pid60492 00:23:15.300 Removing: /var/run/dpdk/spdk_pid60500 00:23:15.300 Removing: /var/run/dpdk/spdk_pid60507 00:23:15.300 Removing: /var/run/dpdk/spdk_pid60515 00:23:15.300 Removing: /var/run/dpdk/spdk_pid60522 00:23:15.300 Removing: /var/run/dpdk/spdk_pid60604 00:23:15.300 Removing: /var/run/dpdk/spdk_pid60652 00:23:15.300 Removing: /var/run/dpdk/spdk_pid60764 00:23:15.300 Removing: /var/run/dpdk/spdk_pid60803 00:23:15.300 Removing: /var/run/dpdk/spdk_pid60848 00:23:15.300 Removing: /var/run/dpdk/spdk_pid60863 00:23:15.300 Removing: /var/run/dpdk/spdk_pid60885 00:23:15.300 Removing: /var/run/dpdk/spdk_pid60900 00:23:15.300 Removing: /var/run/dpdk/spdk_pid60937 00:23:15.300 Removing: /var/run/dpdk/spdk_pid60953 00:23:15.300 Removing: /var/run/dpdk/spdk_pid61032 00:23:15.300 Removing: /var/run/dpdk/spdk_pid61054 00:23:15.300 Removing: /var/run/dpdk/spdk_pid61104 00:23:15.300 Removing: /var/run/dpdk/spdk_pid61191 00:23:15.300 Removing: /var/run/dpdk/spdk_pid61247 00:23:15.300 Removing: /var/run/dpdk/spdk_pid61276 00:23:15.300 Removing: /var/run/dpdk/spdk_pid61376 00:23:15.300 Removing: /var/run/dpdk/spdk_pid61419 00:23:15.300 Removing: /var/run/dpdk/spdk_pid61457 00:23:15.300 Removing: /var/run/dpdk/spdk_pid61684 00:23:15.300 Removing: /var/run/dpdk/spdk_pid61781 00:23:15.300 Removing: /var/run/dpdk/spdk_pid61810 00:23:15.300 Removing: /var/run/dpdk/spdk_pid61839 00:23:15.300 Removing: /var/run/dpdk/spdk_pid61873 00:23:15.300 Removing: /var/run/dpdk/spdk_pid61906 00:23:15.300 Removing: /var/run/dpdk/spdk_pid61940 00:23:15.300 Removing: /var/run/dpdk/spdk_pid61971 00:23:15.300 Removing: /var/run/dpdk/spdk_pid62380 00:23:15.300 Removing: /var/run/dpdk/spdk_pid62418 00:23:15.300 Removing: /var/run/dpdk/spdk_pid62760 00:23:15.300 Removing: /var/run/dpdk/spdk_pid63226 00:23:15.300 Removing: /var/run/dpdk/spdk_pid63509 00:23:15.300 Removing: /var/run/dpdk/spdk_pid64348 00:23:15.300 Removing: /var/run/dpdk/spdk_pid65281 00:23:15.300 Removing: /var/run/dpdk/spdk_pid65397 00:23:15.300 Removing: /var/run/dpdk/spdk_pid65466 00:23:15.300 Removing: /var/run/dpdk/spdk_pid66881 00:23:15.300 Removing: /var/run/dpdk/spdk_pid67190 00:23:15.300 Removing: /var/run/dpdk/spdk_pid70868 00:23:15.300 Removing: /var/run/dpdk/spdk_pid71218 00:23:15.300 Removing: /var/run/dpdk/spdk_pid71327 00:23:15.300 Removing: /var/run/dpdk/spdk_pid71464 00:23:15.300 Removing: /var/run/dpdk/spdk_pid71498 00:23:15.300 Removing: /var/run/dpdk/spdk_pid71527 00:23:15.300 Removing: /var/run/dpdk/spdk_pid71548 00:23:15.300 Removing: /var/run/dpdk/spdk_pid71631 00:23:15.300 Removing: /var/run/dpdk/spdk_pid71755 00:23:15.300 Removing: /var/run/dpdk/spdk_pid71917 00:23:15.300 Removing: /var/run/dpdk/spdk_pid71991 00:23:15.300 Removing: /var/run/dpdk/spdk_pid72186 00:23:15.300 Removing: /var/run/dpdk/spdk_pid72250 00:23:15.300 Removing: /var/run/dpdk/spdk_pid72348 00:23:15.300 Removing: /var/run/dpdk/spdk_pid72707 00:23:15.300 Removing: /var/run/dpdk/spdk_pid73112 00:23:15.300 Removing: /var/run/dpdk/spdk_pid73113 00:23:15.300 Removing: /var/run/dpdk/spdk_pid73114 00:23:15.300 Removing: /var/run/dpdk/spdk_pid73369 00:23:15.300 Removing: /var/run/dpdk/spdk_pid73631 00:23:15.300 Removing: /var/run/dpdk/spdk_pid74023 00:23:15.300 Removing: /var/run/dpdk/spdk_pid74025 00:23:15.559 Removing: /var/run/dpdk/spdk_pid74352 00:23:15.559 Removing: /var/run/dpdk/spdk_pid74366 
00:23:15.559 Removing: /var/run/dpdk/spdk_pid74391 00:23:15.559 Removing: /var/run/dpdk/spdk_pid74416 00:23:15.559 Removing: /var/run/dpdk/spdk_pid74421 00:23:15.559 Removing: /var/run/dpdk/spdk_pid74776 00:23:15.559 Removing: /var/run/dpdk/spdk_pid74819 00:23:15.559 Removing: /var/run/dpdk/spdk_pid75147 00:23:15.559 Removing: /var/run/dpdk/spdk_pid75345 00:23:15.559 Removing: /var/run/dpdk/spdk_pid75786 00:23:15.559 Removing: /var/run/dpdk/spdk_pid76346 00:23:15.559 Removing: /var/run/dpdk/spdk_pid77235 00:23:15.559 Removing: /var/run/dpdk/spdk_pid77869 00:23:15.559 Removing: /var/run/dpdk/spdk_pid77872 00:23:15.559 Removing: /var/run/dpdk/spdk_pid79887 00:23:15.559 Removing: /var/run/dpdk/spdk_pid79934 00:23:15.559 Removing: /var/run/dpdk/spdk_pid80000 00:23:15.559 Removing: /var/run/dpdk/spdk_pid80054 00:23:15.559 Removing: /var/run/dpdk/spdk_pid80172 00:23:15.559 Removing: /var/run/dpdk/spdk_pid80232 00:23:15.559 Removing: /var/run/dpdk/spdk_pid80285 00:23:15.559 Removing: /var/run/dpdk/spdk_pid80345 00:23:15.559 Removing: /var/run/dpdk/spdk_pid80708 00:23:15.559 Removing: /var/run/dpdk/spdk_pid81917 00:23:15.559 Removing: /var/run/dpdk/spdk_pid82051 00:23:15.559 Removing: /var/run/dpdk/spdk_pid82299 00:23:15.559 Removing: /var/run/dpdk/spdk_pid82895 00:23:15.559 Removing: /var/run/dpdk/spdk_pid83056 00:23:15.559 Removing: /var/run/dpdk/spdk_pid83213 00:23:15.559 Removing: /var/run/dpdk/spdk_pid83310 00:23:15.559 Removing: /var/run/dpdk/spdk_pid83476 00:23:15.559 Removing: /var/run/dpdk/spdk_pid83585 00:23:15.559 Removing: /var/run/dpdk/spdk_pid84290 00:23:15.559 Removing: /var/run/dpdk/spdk_pid84320 00:23:15.559 Removing: /var/run/dpdk/spdk_pid84361 00:23:15.559 Removing: /var/run/dpdk/spdk_pid84616 00:23:15.559 Removing: /var/run/dpdk/spdk_pid84651 00:23:15.559 Removing: /var/run/dpdk/spdk_pid84681 00:23:15.559 Removing: /var/run/dpdk/spdk_pid85157 00:23:15.559 Removing: /var/run/dpdk/spdk_pid85167 00:23:15.559 Removing: /var/run/dpdk/spdk_pid85406 00:23:15.559 Removing: /var/run/dpdk/spdk_pid85533 00:23:15.559 Removing: /var/run/dpdk/spdk_pid85545 00:23:15.559 Clean 00:23:15.559 10:00:40 -- common/autotest_common.sh@1453 -- # return 0 00:23:15.559 10:00:40 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:23:15.559 10:00:40 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:15.559 10:00:40 -- common/autotest_common.sh@10 -- # set +x 00:23:15.559 10:00:40 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:23:15.559 10:00:40 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:15.559 10:00:40 -- common/autotest_common.sh@10 -- # set +x 00:23:15.818 10:00:40 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:23:15.818 10:00:40 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:23:15.818 10:00:40 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:23:15.818 10:00:40 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:23:15.818 10:00:40 -- spdk/autotest.sh@398 -- # hostname 00:23:15.818 10:00:40 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:23:15.818 geninfo: WARNING: invalid characters removed from testname! 
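With the functional tests done, autotest.sh turns the gcov counters into an lcov report: capture a tracefile for this run, merge it with the pre-build baseline, then strip DPDK, system headers, and example code from the totals (those filtering steps follow below). A condensed version of the sequence, with the long flag lists and absolute paths trimmed; $SPDK_DIR stands in for the repository path:

    # Capture coverage for this run into cov_test.info (the trace uses the
    # hostname as the test name, which is what triggers the geninfo warning).
    lcov -q -c --no-external -d "$SPDK_DIR" -t "$(hostname)" -o cov_test.info

    # Merge with the baseline and prune uninteresting paths from the totals.
    lcov -q -a cov_base.info -a cov_test.info -o cov_total.info
    lcov -q -r cov_total.info '*/dpdk/*' -o cov_total.info
    lcov -q -r cov_total.info '/usr/*'   -o cov_total.info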
00:23:37.743 10:01:02 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:41.024 10:01:06 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:43.554 10:01:08 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:46.090 10:01:10 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:48.629 10:01:13 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:51.160 10:01:15 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:53.694 10:01:18 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:23:53.694 10:01:18 -- spdk/autorun.sh@1 -- $ timing_finish 00:23:53.694 10:01:18 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:23:53.694 10:01:18 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:23:53.694 10:01:18 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:23:53.694 10:01:18 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:23:53.694 + [[ -n 5209 ]] 00:23:53.694 + sudo kill 5209 00:23:53.704 [Pipeline] } 00:23:53.720 [Pipeline] // timeout 00:23:53.725 [Pipeline] } 00:23:53.740 [Pipeline] // stage 00:23:53.746 [Pipeline] } 00:23:53.761 [Pipeline] // catchError 00:23:53.770 [Pipeline] stage 00:23:53.773 [Pipeline] { (Stop VM) 00:23:53.786 [Pipeline] sh 00:23:54.068 + vagrant halt 00:23:57.356 ==> default: Halting domain... 
00:24:02.641 [Pipeline] sh 00:24:02.922 + vagrant destroy -f 00:24:06.314 ==> default: Removing domain... 00:24:06.326 [Pipeline] sh 00:24:06.607 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output 00:24:06.616 [Pipeline] } 00:24:06.632 [Pipeline] // stage 00:24:06.638 [Pipeline] } 00:24:06.654 [Pipeline] // dir 00:24:06.659 [Pipeline] } 00:24:06.674 [Pipeline] // wrap 00:24:06.681 [Pipeline] } 00:24:06.694 [Pipeline] // catchError 00:24:06.726 [Pipeline] stage 00:24:06.728 [Pipeline] { (Epilogue) 00:24:06.742 [Pipeline] sh 00:24:07.029 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:24:12.304 [Pipeline] catchError 00:24:12.305 [Pipeline] { 00:24:12.312 [Pipeline] sh 00:24:12.593 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:24:12.593 Artifacts sizes are good 00:24:12.604 [Pipeline] } 00:24:12.615 [Pipeline] // catchError 00:24:12.624 [Pipeline] archiveArtifacts 00:24:12.630 Archiving artifacts 00:24:12.756 [Pipeline] cleanWs 00:24:12.769 [WS-CLEANUP] Deleting project workspace... 00:24:12.769 [WS-CLEANUP] Deferred wipeout is used... 00:24:12.799 [WS-CLEANUP] done 00:24:12.801 [Pipeline] } 00:24:12.815 [Pipeline] // stage 00:24:12.819 [Pipeline] } 00:24:12.831 [Pipeline] // node 00:24:12.836 [Pipeline] End of Pipeline 00:24:12.868 Finished: SUCCESS